diff --git a/README.textile b/README.textile
index f48e40c2eaf..63f1841822c 100644
--- a/README.textile
+++ b/README.textile
@@ -42,7 +42,7 @@ h3. Installation

 * "Download":https://www.elastic.co/downloads/elasticsearch and unzip the Elasticsearch official distribution.
 * Run @bin/elasticsearch@ on unix, or @bin\elasticsearch.bat@ on windows.
-* Run @curl -X GET http://127.0.0.1:9200/@.
+* Run @curl -X GET http://localhost:9200/@.
 * Start more servers ...

 h3. Indexing

@@ -50,16 +50,16 @@ h3. Indexing
 Let's try and index some twitter-like information. First, let's create a twitter user, and add some tweets (the @twitter@ index will be created automatically):

-curl -XPUT 'http://127.0.0.1:9200/twitter/user/kimchy' -d '{ "name" : "Shay Banon" }'
+curl -XPUT 'http://localhost:9200/twitter/user/kimchy' -d '{ "name" : "Shay Banon" }'

-curl -XPUT 'http://127.0.0.1:9200/twitter/tweet/1' -d '
+curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '
 {
     "user": "kimchy",
     "postDate": "2009-11-15T13:12:00",
     "message": "Trying out Elasticsearch, so far so good?"
 }'

-curl -XPUT 'http://127.0.0.1:9200/twitter/tweet/2' -d '
+curl -XPUT 'http://localhost:9200/twitter/tweet/2' -d '
 {
     "user": "kimchy",
     "postDate": "2009-11-15T14:12:12",
@@ -70,9 +70,9 @@ curl -XPUT 'http://127.0.0.1:9200/twitter/tweet/2' -d '

 Now, let's see if the information was added by GETting it:

-curl -XGET 'http://127.0.0.1:9200/twitter/user/kimchy?pretty=true'
-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/1?pretty=true'
-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/2?pretty=true'
+curl -XGET 'http://localhost:9200/twitter/user/kimchy?pretty=true'
+curl -XGET 'http://localhost:9200/twitter/tweet/1?pretty=true'
+curl -XGET 'http://localhost:9200/twitter/tweet/2?pretty=true'

 h3. Searching

@@ -81,13 +81,13 @@ Mmm search..., shouldn't it be elastic?
 Let's find all the tweets that @kimchy@ posted:

-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/_search?q=user:kimchy&pretty=true'
+curl -XGET 'http://localhost:9200/twitter/tweet/_search?q=user:kimchy&pretty=true'

 We can also use the JSON query language Elasticsearch provides instead of a query string:

-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/twitter/tweet/_search?pretty=true' -d '
 {
     "query" : {
         "match" : { "user": "kimchy" }
@@ -98,7 +98,7 @@ curl -XGET 'http://127.0.0.1:9200/twitter/tweet/_search?pretty=true' -d '
 Just for kicks, let's get all the documents stored (we should see the user as well):

-curl -XGET 'http://127.0.0.1:9200/twitter/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
 {
     "query" : {
         "matchAll" : {}
@@ -109,7 +109,7 @@ curl -XGET 'http://127.0.0.1:9200/twitter/_search?pretty=true' -d '
 We can also do a range search (the @postDate@ was automatically identified as a date):

-curl -XGET 'http://127.0.0.1:9200/twitter/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
 {
     "query" : {
         "range" : {
@@ -130,16 +130,16 @@ Elasticsearch supports multiple indices, as well as multiple types per index. In
 Another way to define our simple twitter system is to have a different index per user (note, though, that each index has an overhead). Here are the indexing curl commands in this case:

-curl -XPUT 'http://127.0.0.1:9200/kimchy/info/1' -d '{ "name" : "Shay Banon" }'
+curl -XPUT 'http://localhost:9200/kimchy/info/1' -d '{ "name" : "Shay Banon" }'

-curl -XPUT 'http://127.0.0.1:9200/kimchy/tweet/1' -d '
+curl -XPUT 'http://localhost:9200/kimchy/tweet/1' -d '
 {
     "user": "kimchy",
     "postDate": "2009-11-15T13:12:00",
     "message": "Trying out Elasticsearch, so far so good?"
 }'

-curl -XPUT 'http://127.0.0.1:9200/kimchy/tweet/2' -d '
+curl -XPUT 'http://localhost:9200/kimchy/tweet/2' -d '
 {
     "user": "kimchy",
     "postDate": "2009-11-15T14:12:12",
@@ -152,7 +152,7 @@ The above will index information into the @kimchy@ index, with two types, @info@
 Complete control on the index level is allowed. As an example, in the above case, we would want to change from the default 5 shards with 1 replica per index, to only 1 shard with 1 replica per index (== per twitter user).
 Here is how this can be done (the configuration can be in yaml as well):

-curl -XPUT http://127.0.0.1:9200/another_user/ -d '
+curl -XPUT http://localhost:9200/another_user/ -d '
 {
     "index" : {
         "numberOfShards" : 1,
@@ -165,7 +165,7 @@ Search (and similar operations) are multi index aware. This means that we can ea
 index (twitter user), for example:

-curl -XGET 'http://127.0.0.1:9200/kimchy,another_user/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -d '
 {
     "query" : {
         "matchAll" : {}
@@ -176,7 +176,7 @@ curl -XGET 'http://127.0.0.1:9200/kimchy,another_user/_search?pretty=true' -d '
 Or on all the indices:

-curl -XGET 'http://127.0.0.1:9200/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/_search?pretty=true' -d '
 {
     "query" : {
         "matchAll" : {}
diff --git a/TESTING.asciidoc b/TESTING.asciidoc
index f35d32d9d9f..13ceef4bdd7 100644
--- a/TESTING.asciidoc
+++ b/TESTING.asciidoc
@@ -321,11 +321,13 @@ Vagrant. You can get started by following these five easy steps:

 . (Optional) Install vagrant-cachier to squeeze a bit more performance out of
 the process:
+
 --------------------------------------
 vagrant plugin install vagrant-cachier
 --------------------------------------

 . Validate your installed dependencies:
+
 -------------------------------------
 mvn -Dtests.vagrant -pl qa/vagrant validate
 -------------------------------------
@@ -334,11 +336,14 @@ mvn -Dtests.vagrant -pl qa/vagrant validate
 from Vagrant when you run it inside mvn it's probably best if you run this one
 time to setup all the VMs one at a time. Run this to download and setup the VMs
 we use for testing by default:
+
 --------------------------------------------------------
 vagrant up --provision trusty && vagrant halt trusty
 vagrant up --provision centos-7 && vagrant halt centos-7
 --------------------------------------------------------
+
 or run this to download and setup all the VMs:
+
 -------------------------------------------------------------------------------
 vagrant halt
 for box in $(vagrant status | grep 'poweroff\|not created' | cut -f1 -d' '); do
@@ -349,24 +354,32 @@ done

 . Smoke test that the maven/ant dance we use to get vagrant involved in
 integration testing is working:
+
 ---------------------------------------------
 mvn -Dtests.vagrant -Psmoke-vms -pl qa/vagrant verify
 ---------------------------------------------
+
 or this to validate all the VMs:
+
 -------------------------------------------------
 mvn -Dtests.vagrant=all -Psmoke-vms -pl qa/vagrant verify
 -------------------------------------------------
+
 That will start up the VMs and then immediately quit.

 . Finally run the tests. The fastest way to get this started is to run:
+
 -----------------------------------
 mvn clean install -DskipTests
 mvn -Dtests.vagrant -pl qa/vagrant verify
 -----------------------------------
+
 You could just run:
+
 --------------------
 mvn -Dtests.vagrant verify
 --------------------
+
 but that will run all the tests. Which is probably a good thing, but not always
 what you want.
@@ -379,39 +392,51 @@ packaging and SysVinit and systemd.

 You can control the boxes that are used for testing like so.
 Run just fedora-22 with:
+
 --------------------------------------------
 mvn -Dtests.vagrant -pl qa/vagrant verify -DboxesToTest=fedora-22
 --------------------------------------------
+
 or run wheezy and trusty:
+
 ------------------------------------------------------------------
 mvn -Dtests.vagrant -pl qa/vagrant verify -DboxesToTest='wheezy, trusty'
 ------------------------------------------------------------------
+
 or run all the boxes:
+
 ---------------------------------------
 mvn -Dtests.vagrant=all -pl qa/vagrant verify
 ---------------------------------------

 It's important to know that if you ctrl-c any of these `mvn` runs you'll
 probably leave a VM up. You can terminate it by running:
+
 ------------
 vagrant halt
 ------------

 This is just regular vagrant so you can run normal multi box vagrant commands
 to test things manually. Just run:
+
 ---------------------------------------
 vagrant up trusty && vagrant ssh trusty
 ---------------------------------------
+
 to get an Ubuntu or
+
 -------------------------------------------
 vagrant up centos-7 && vagrant ssh centos-7
 -------------------------------------------
+
 to get a CentOS. Once you are done with them you should halt them:
+
 -------------------
 vagrant halt trusty
 -------------------

 These are the linux flavors the Vagrantfile currently supports:
+
 * precise aka Ubuntu 12.04
 * trusty aka Ubuntu 14.04
 * vivid aka Ubuntu 15.04
@@ -424,23 +449,29 @@ These are the linux flavors the Vagrantfile currently supports:

 We're missing the following from the support matrix because there aren't high
 quality boxes available in vagrant atlas:
+
 * sles-11
 * sles-12
 * opensuse-13
 * oel-6

 We're missing the following because our tests are very linux/bash centric:
+
 * Windows Server 2012

 It's important to think of VMs like cattle: if they become lame you just shoot
 them and let vagrant reprovision them. Say you've hosed your precise VM:
+
 ----------------------------------------------------
 vagrant ssh precise -c 'sudo rm -rf /bin'; echo oops
 ----------------------------------------------------
+
 All you've got to do to get another one is
+
 ------------------------------------------------
 vagrant destroy -f precise && vagrant up precise
 ------------------------------------------------
+
 The whole process takes a minute and a half on a modern laptop, two and a half
 without vagrant-cachier.
@@ -450,14 +481,17 @@ around it: https://github.com/mitchellh/vagrant/issues/4479

 Some vagrant commands will work on all VMs at once:
+
 ------------------
 vagrant halt
 vagrant destroy -f
 ------------------
+
 ----------
 vagrant up
 ----------
+
 would normally start all the VMs but we've prevented that because that'd
 consume a ton of ram.
@@ -466,10 +500,13 @@ consume a ton of ram.

 In general it's best to stick to testing in vagrant because the bats scripts are
 destructive. When working with a single package it's generally faster to run its
 tests in a tighter loop than maven provides.
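The second window in the loop below relies on `$RPM` and `$BATS`, which the
Vagrantfile provisioning exports inside the VM. As a reference sketch, the
relevant exports, copied from the Vagrantfile hunk later in this patch:

-------------------------------------------------------------------
# Exported inside the VM by the Vagrantfile provisioning; shown here
# only so the $RPM and $BATS references below make sense.
export RPM=/elasticsearch/distribution/rpm/target/releases
export BATS=/elasticsearch/qa/vagrant/src/test/resources/packaging/scripts
-------------------------------------------------------------------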
 In one window:
+
 --------------------------------
 mvn -pl distribution/rpm package
 --------------------------------
+
 and in another window:
+
 ----------------------------------------------------
 vagrant up centos-7 && vagrant ssh centos-7
 cd $RPM
@@ -477,6 +514,7 @@ sudo bats $BATS/*rpm*.bats
 ----------------------------------------------------

 If you wanted to retest all the release artifacts on a single VM you could:
+
 -------------------------------------------------
 # Build all the distributions fresh but skip recompiling elasticsearch:
 mvn -amd -pl distribution install -DskipTests
diff --git a/Vagrantfile b/Vagrantfile
index c6698725c2a..3938bb0b36f 100644
--- a/Vagrantfile
+++ b/Vagrantfile
@@ -22,32 +22,32 @@
 # under the License.

 Vagrant.configure(2) do |config|
-  config.vm.define "precise", autostart: false do |config|
+  config.vm.define "precise" do |config|
     config.vm.box = "ubuntu/precise64"
     ubuntu_common config
   end
-  config.vm.define "trusty", autostart: false do |config|
+  config.vm.define "trusty" do |config|
     config.vm.box = "ubuntu/trusty64"
     ubuntu_common config
   end
-  config.vm.define "vivid", autostart: false do |config|
+  config.vm.define "vivid" do |config|
     config.vm.box = "ubuntu/vivid64"
     ubuntu_common config
   end
-  config.vm.define "wheezy", autostart: false do |config|
+  config.vm.define "wheezy" do |config|
     config.vm.box = "debian/wheezy64"
     deb_common(config)
   end
-  config.vm.define "jessie", autostart: false do |config|
+  config.vm.define "jessie" do |config|
     config.vm.box = "debian/jessie64"
     deb_common(config)
   end
-  config.vm.define "centos-6", autostart: false do |config|
+  config.vm.define "centos-6" do |config|
     # TODO switch from chef to boxcutter to provide?
     config.vm.box = "chef/centos-6.6"
     rpm_common(config)
   end
-  config.vm.define "centos-7", autostart: false do |config|
+  config.vm.define "centos-7" do |config|
     # There is a centos/7 box but it doesn't have rsync or virtualbox guest
     # stuff on there so it's slow to use. So chef it is....
     # TODO switch from chef to boxcutter to provide?
@@ -59,11 +59,11 @@ Vagrant.configure(2) do |config|
 #   config.vm.box = "boxcutter/oel66"
 #   rpm_common(config)
 # end
-  config.vm.define "oel-7", autostart: false do |config|
+  config.vm.define "oel-7" do |config|
     config.vm.box = "boxcutter/oel70"
     rpm_common(config)
   end
-  config.vm.define "fedora-22", autostart: false do |config|
+  config.vm.define "fedora-22" do |config|
     # Fedora hosts their own 'cloud' images that aren't in Vagrant's Atlas and
     # are missing required stuff like rsync. It'd be nice if we could use
     # them but they're much slower to get up and running than the boxcutter image.
@@ -75,6 +75,33 @@ Vagrant.configure(2) do |config|
   # the elasticsearch project called vagrant....
   config.vm.synced_folder ".", "/vagrant", disabled: true
   config.vm.synced_folder "", "/elasticsearch"
+  if Vagrant.has_plugin?("vagrant-cachier")
+    config.cache.scope = :box
+  end
+  config.vm.defined_vms.each do |name, config|
+    config.options[:autostart] = false
+    set_prompt = lambda do |config|
+      # Sets up a consistent prompt for all users. Or tries to. The VM might
+      # contain overrides for root and vagrant but this attempts to work around
+      # them by re-source-ing the standard prompt file.
+ config.vm.provision "prompt", type: "shell", inline: <<-SHELL + cat \<\/etc/profile.d/elasticsearch_prompt.sh +export PS1='#{name}:\\w$ ' +PROMPT + grep 'source /etc/profile.d/elasticsearch_prompt.sh' ~/.bashrc | + cat \<\ > ~/.bashrc +# Replace the standard prompt with a consistent one +source /etc/profile.d/elasticsearch_prompt.sh +SOURCE_PROMPT + grep 'source /etc/profile.d/elasticsearch_prompt.sh' ~vagrant/.bashrc | + cat \<\ > ~vagrant/.bashrc +# Replace the standard prompt with a consistent one +source /etc/profile.d/elasticsearch_prompt.sh +SOURCE_PROMPT + SHELL + end + config.config_procs.push ['2', set_prompt] + end end def ubuntu_common(config) @@ -90,24 +117,17 @@ end def deb_common(config) provision(config, "apt-get update", "/var/cache/apt/archives/last_update", "apt-get install -y", "openjdk-7-jdk") - if Vagrant.has_plugin?("vagrant-cachier") - config.cache.scope = :box - end end def rpm_common(config) provision(config, "yum check-update", "/var/cache/yum/last_update", "yum install -y", "java-1.7.0-openjdk-devel") - if Vagrant.has_plugin?("vagrant-cachier") - config.cache.scope = :box - end end def dnf_common(config) provision(config, "dnf check-update", "/var/cache/dnf/last_update", "dnf install -y", "java-1.8.0-openjdk-devel") if Vagrant.has_plugin?("vagrant-cachier") - config.cache.scope = :box # Autodetect doesn't work.... config.cache.auto_detect = false config.cache.enable :generic, { :cache_dir => "/var/cache/dnf" } @@ -116,7 +136,7 @@ end def provision(config, update_command, update_tracking_file, install_command, java_package) - config.vm.provision "elasticsearch bats dependencies", type: "shell", inline: <<-SHELL + config.vm.provision "bats dependencies", type: "shell", inline: <<-SHELL set -e installed() { command -v $1 2>&1 >/dev/null @@ -150,7 +170,7 @@ export TAR=/elasticsearch/distribution/tar/target/releases export RPM=/elasticsearch/distribution/rpm/target/releases export DEB=/elasticsearch/distribution/deb/target/releases export TESTROOT=/elasticsearch/qa/vagrant/target/testroot -export BATS=/elasticsearch/qa/vagrant/src/test/resources/packaging/scripts/ +export BATS=/elasticsearch/qa/vagrant/src/test/resources/packaging/scripts VARS SHELL end diff --git a/core/README.textile b/core/README.textile index b2873e8b56e..720f357406b 100644 --- a/core/README.textile +++ b/core/README.textile @@ -42,7 +42,7 @@ h3. Installation * "Download":https://www.elastic.co/downloads/elasticsearch and unzip the Elasticsearch official distribution. * Run @bin/elasticsearch@ on unix, or @bin\elasticsearch.bat@ on windows. -* Run @curl -X GET http://127.0.0.1:9200/@. +* Run @curl -X GET http://localhost:9200/@. * Start more servers ... h3. Indexing @@ -50,16 +50,16 @@ h3. Indexing Let's try and index some twitter like information. First, let's create a twitter user, and add some tweets (the @twitter@ index will be created automatically): -curl -XPUT 'http://127.0.0.1:9200/twitter/user/kimchy' -d '{ "name" : "Shay Banon" }' +curl -XPUT 'http://localhost:9200/twitter/user/kimchy' -d '{ "name" : "Shay Banon" }' -curl -XPUT 'http://127.0.0.1:9200/twitter/tweet/1' -d ' +curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d ' { "user": "kimchy", "postDate": "2009-11-15T13:12:00", "message": "Trying out Elasticsearch, so far so good?" 
 }'

-curl -XPUT 'http://127.0.0.1:9200/twitter/tweet/2' -d '
+curl -XPUT 'http://localhost:9200/twitter/tweet/2' -d '
 {
     "user": "kimchy",
     "postDate": "2009-11-15T14:12:12",
@@ -70,9 +70,9 @@ curl -XPUT 'http://127.0.0.1:9200/twitter/tweet/2' -d '

 Now, let's see if the information was added by GETting it:

-curl -XGET 'http://127.0.0.1:9200/twitter/user/kimchy?pretty=true'
-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/1?pretty=true'
-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/2?pretty=true'
+curl -XGET 'http://localhost:9200/twitter/user/kimchy?pretty=true'
+curl -XGET 'http://localhost:9200/twitter/tweet/1?pretty=true'
+curl -XGET 'http://localhost:9200/twitter/tweet/2?pretty=true'

 h3. Searching

@@ -81,13 +81,13 @@ Mmm search..., shouldn't it be elastic?
 Let's find all the tweets that @kimchy@ posted:

-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/_search?q=user:kimchy&pretty=true'
+curl -XGET 'http://localhost:9200/twitter/tweet/_search?q=user:kimchy&pretty=true'

 We can also use the JSON query language Elasticsearch provides instead of a query string:

-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/twitter/tweet/_search?pretty=true' -d '
 {
     "query" : {
         "match" : { "user": "kimchy" }
@@ -98,7 +98,7 @@ curl -XGET 'http://127.0.0.1:9200/twitter/tweet/_search?pretty=true' -d '
 Just for kicks, let's get all the documents stored (we should see the user as well):

-curl -XGET 'http://127.0.0.1:9200/twitter/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
 {
     "query" : {
         "matchAll" : {}
@@ -109,7 +109,7 @@ curl -XGET 'http://127.0.0.1:9200/twitter/_search?pretty=true' -d '
 We can also do a range search (the @postDate@ was automatically identified as a date):

-curl -XGET 'http://127.0.0.1:9200/twitter/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
 {
     "query" : {
         "range" : {
@@ -130,16 +130,16 @@ Elasticsearch supports multiple indices, as well as multiple types per index. In
 Another way to define our simple twitter system is to have a different index per user (note, though, that each index has an overhead). Here are the indexing curl commands in this case:

-curl -XPUT 'http://127.0.0.1:9200/kimchy/info/1' -d '{ "name" : "Shay Banon" }'
+curl -XPUT 'http://localhost:9200/kimchy/info/1' -d '{ "name" : "Shay Banon" }'

-curl -XPUT 'http://127.0.0.1:9200/kimchy/tweet/1' -d '
+curl -XPUT 'http://localhost:9200/kimchy/tweet/1' -d '
 {
     "user": "kimchy",
     "postDate": "2009-11-15T13:12:00",
     "message": "Trying out Elasticsearch, so far so good?"
 }'

-curl -XPUT 'http://127.0.0.1:9200/kimchy/tweet/2' -d '
+curl -XPUT 'http://localhost:9200/kimchy/tweet/2' -d '
 {
     "user": "kimchy",
     "postDate": "2009-11-15T14:12:12",
@@ -152,7 +152,7 @@ The above will index information into the @kimchy@ index, with two types, @info@
 Complete control on the index level is allowed. As an example, in the above case, we would want to change from the default 5 shards with 1 replica per index, to only 1 shard with 1 replica per index (== per twitter user).

 Here is how this can be done (the configuration can be in yaml as well):

-curl -XPUT http://127.0.0.1:9200/another_user/ -d '
+curl -XPUT http://localhost:9200/another_user/ -d '
 {
     "index" : {
         "numberOfShards" : 1,
@@ -165,7 +165,7 @@ Search (and similar operations) are multi index aware. This means that we can ea
 index (twitter user), for example:

-curl -XGET 'http://127.0.0.1:9200/kimchy,another_user/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -d '
 {
     "query" : {
         "matchAll" : {}
@@ -176,7 +176,7 @@ curl -XGET 'http://127.0.0.1:9200/kimchy,another_user/_search?pretty=true' -d '
 Or on all the indices:

-curl -XGET 'http://127.0.0.1:9200/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/_search?pretty=true' -d '
 {
     "query" : {
         "matchAll" : {}
diff --git a/core/pom.xml b/core/pom.xml
index dbc1b022b71..5c0dd2555c6 100644
--- a/core/pom.xml
+++ b/core/pom.xml
@@ -5,15 +5,14 @@
     <modelVersion>4.0.0</modelVersion>
     <parent>
         <groupId>org.elasticsearch</groupId>
-        <artifactId>elasticsearch-parent</artifactId>
+        <artifactId>parent</artifactId>
         <version>2.1.0-SNAPSHOT</version>
     </parent>

     <groupId>org.elasticsearch</groupId>
     <artifactId>elasticsearch</artifactId>
-    <packaging>jar</packaging>

-    <name>Elasticsearch Core</name>
+    <name>Elasticsearch: Core</name>
     <description>Elasticsearch - Open Source, Distributed, RESTful Search Engine</description>
diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/NodesInfoRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/NodesInfoRequest.java
index d01167ceeca..46a36f1d8a3 100644
--- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/NodesInfoRequest.java
+++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/NodesInfoRequest.java
@@ -35,7 +35,6 @@ public class NodesInfoRequest extends BaseNodesRequest<NodesInfoRequest> {
     private boolean process = true;
     private boolean jvm = true;
     private boolean threadPool = true;
-    private boolean network = true;
     private boolean transport = true;
     private boolean http = true;
     private boolean plugins = true;
@@ -60,7 +59,6 @@ public class NodesInfoRequest extends BaseNodesRequest<NodesInfoRequest> {
         process = false;
         jvm = false;
         threadPool = false;
-        network = false;
         transport = false;
         http = false;
         plugins = false;
@@ -76,7 +74,6 @@ public class NodesInfoRequest extends BaseNodesRequest<NodesInfoRequest> {
         process = true;
         jvm = true;
         threadPool = true;
-        network = true;
         transport = true;
         http = true;
         plugins = true;
@@ -158,21 +155,6 @@ public class NodesInfoRequest extends BaseNodesRequest<NodesInfoRequest> {
         return this;
     }

-    /**
-     * Should the node Network be returned.
-     */
-    public boolean network() {
-        return this.network;
-    }
-
-    /**
-     * Should the node Network be returned.
-     */
-    public NodesInfoRequest network(boolean network) {
-        this.network = network;
-        return this;
-    }
-
     /**
      * Should the node Transport be returned.
      */
@@ -228,7 +210,6 @@ public class NodesInfoRequest extends BaseNodesRequest<NodesInfoRequest> {
         process = in.readBoolean();
         jvm = in.readBoolean();
         threadPool = in.readBoolean();
-        network = in.readBoolean();
         transport = in.readBoolean();
         http = in.readBoolean();
         plugins = in.readBoolean();
@@ -242,7 +223,6 @@ public class NodesInfoRequest extends BaseNodesRequest<NodesInfoRequest> {
         out.writeBoolean(process);
         out.writeBoolean(jvm);
         out.writeBoolean(threadPool);
-        out.writeBoolean(network);
         out.writeBoolean(transport);
         out.writeBoolean(http);
         out.writeBoolean(plugins);
diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/NodesInfoRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/NodesInfoRequestBuilder.java
index a1c5b405063..42d69794f08 100644
--- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/NodesInfoRequestBuilder.java
+++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/NodesInfoRequestBuilder.java
@@ -87,14 +87,6 @@ public class NodesInfoRequestBuilder extends NodesOperationRequestBuilder<NodesInfoRequest, NodesInfoResponse, NodesInfoRequestBuilder> {
diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodesStatsRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodesStatsRequest.java
--- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodesStatsRequest.java
+++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodesStatsRequest.java
     private boolean process;
     private boolean jvm;
     private boolean threadPool;
-    private boolean network;
     private boolean fs;
     private boolean transport;
     private boolean http;
@@ -63,7 +62,6 @@ public class NodesStatsRequest extends BaseNodesRequest<NodesStatsRequest> {
         this.process = true;
         this.jvm = true;
         this.threadPool = true;
-        this.network = true;
         this.fs = true;
         this.transport = true;
         this.http = true;
@@ -81,7 +79,6 @@ public class NodesStatsRequest extends BaseNodesRequest<NodesStatsRequest> {
         this.process = false;
         this.jvm = false;
         this.threadPool = false;
-        this.network = false;
         this.fs = false;
         this.transport = false;
         this.http = false;
@@ -171,21 +168,6 @@ public class NodesStatsRequest extends BaseNodesRequest<NodesStatsRequest> {
         return this;
     }

-    /**
-     * Should the node Network be returned.
-     */
-    public boolean network() {
-        return this.network;
-    }
-
-    /**
-     * Should the node Network be returned.
-     */
-    public NodesStatsRequest network(boolean network) {
-        this.network = network;
-        return this;
-    }
-
     /**
      * Should the node file system stats be returned.
      */
@@ -260,7 +242,6 @@ public class NodesStatsRequest extends BaseNodesRequest<NodesStatsRequest> {
         process = in.readBoolean();
         jvm = in.readBoolean();
         threadPool = in.readBoolean();
-        network = in.readBoolean();
         fs = in.readBoolean();
         transport = in.readBoolean();
         http = in.readBoolean();
@@ -276,7 +257,6 @@ public class NodesStatsRequest extends BaseNodesRequest<NodesStatsRequest> {
         out.writeBoolean(process);
         out.writeBoolean(jvm);
         out.writeBoolean(threadPool);
-        out.writeBoolean(network);
         out.writeBoolean(fs);
         out.writeBoolean(transport);
         out.writeBoolean(http);
diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodesStatsRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodesStatsRequestBuilder.java
index db8d774b39d..dfa8007f7cf 100644
--- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodesStatsRequestBuilder.java
+++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/NodesStatsRequestBuilder.java
@@ -107,14 +107,6 @@ public class NodesStatsRequestBuilder extends NodesOperationRequestBuilder<NodesStatsRequest, NodesStatsResponse, NodesStatsRequestBuilder> {
         List<ShardStats> shardsStats = new ArrayList<>();
         for (IndexService indexService : indicesService) {
             for (IndexShard indexShard : indexService) {
diff --git a/core/src/main/java/org/elasticsearch/action/bulk/BulkProcessor.java b/core/src/main/java/org/elasticsearch/action/bulk/BulkProcessor.java
index 43e433146bd..ab4366dd920 100644
--- a/core/src/main/java/org/elasticsearch/action/bulk/BulkProcessor.java
+++ b/core/src/main/java/org/elasticsearch/action/bulk/BulkProcessor.java
@@ -145,6 +145,10 @@ public class BulkProcessor implements Closeable {
     }

     public static Builder builder(Client client, Listener listener) {
+        if (client == null) {
+            throw new NullPointerException("The client you specified while building a BulkProcessor is null");
+        }
+
         return new Builder(client, listener);
     }
diff --git a/core/src/main/java/org/elasticsearch/cluster/ClusterModule.java b/core/src/main/java/org/elasticsearch/cluster/ClusterModule.java
index 7a4a3e0722d..4720bb087dc 100644
--- a/core/src/main/java/org/elasticsearch/cluster/ClusterModule.java
+++ b/core/src/main/java/org/elasticsearch/cluster/ClusterModule.java
@@ -125,9 +125,9 @@ public class ClusterModule extends AbstractModule {
     private final Settings settings;
     private final DynamicSettings.Builder clusterDynamicSettings = new DynamicSettings.Builder();
    private final DynamicSettings.Builder indexDynamicSettings = new DynamicSettings.Builder();
-    private final ExtensionPoint.TypeExtensionPoint<ShardsAllocator> shardsAllocators = new ExtensionPoint.TypeExtensionPoint<>("shards_allocator", ShardsAllocator.class);
-    private final ExtensionPoint.SetExtensionPoint<AllocationDecider> allocationDeciders = new ExtensionPoint.SetExtensionPoint<>("allocation_decider", AllocationDecider.class, AllocationDeciders.class);
-    private final ExtensionPoint.SetExtensionPoint<IndexTemplateFilter> indexTemplateFilters = new ExtensionPoint.SetExtensionPoint<>("index_template_filter", IndexTemplateFilter.class);
+    private final ExtensionPoint.SelectedType<ShardsAllocator> shardsAllocators = new ExtensionPoint.SelectedType<>("shards_allocator", ShardsAllocator.class);
+    private final ExtensionPoint.ClassSet<AllocationDecider> allocationDeciders = new ExtensionPoint.ClassSet<>("allocation_decider", AllocationDecider.class, AllocationDeciders.class);
+    private final ExtensionPoint.ClassSet<IndexTemplateFilter> indexTemplateFilters = new ExtensionPoint.ClassSet<>("index_template_filter", IndexTemplateFilter.class);

     // pkg private so tests can mock
     Class<? extends ClusterInfoService> clusterInfoServiceImpl = InternalClusterInfoService.class;
@@ -168,7 +168,7 @@ public class ClusterModule extends AbstractModule {
         registerClusterDynamicSetting(IndicesTTLService.INDICES_TTL_INTERVAL, Validator.TIME);
         registerClusterDynamicSetting(MappingUpdatedAction.INDICES_MAPPING_DYNAMIC_TIMEOUT, Validator.TIME);
         registerClusterDynamicSetting(MetaData.SETTING_READ_ONLY, Validator.EMPTY);
-        registerClusterDynamicSetting(RecoverySettings.INDICES_RECOVERY_FILE_CHUNK_SIZE, Validator.BYTES_SIZE);
+        registerClusterDynamicSetting(RecoverySettings.INDICES_RECOVERY_FILE_CHUNK_SIZE, Validator.POSITIVE_BYTES_SIZE);
         registerClusterDynamicSetting(RecoverySettings.INDICES_RECOVERY_TRANSLOG_OPS, Validator.INTEGER);
         registerClusterDynamicSetting(RecoverySettings.INDICES_RECOVERY_TRANSLOG_SIZE, Validator.BYTES_SIZE);
         registerClusterDynamicSetting(RecoverySettings.INDICES_RECOVERY_COMPRESS, Validator.EMPTY);
diff --git a/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNode.java b/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNode.java
index 8d63654e07e..cc5ff81e8d8 100644
--- a/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNode.java
+++ b/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNode.java
@@ -21,6 +21,7 @@ package org.elasticsearch.cluster.node;

 import com.google.common.collect.ImmutableList;
 import com.google.common.collect.ImmutableMap;
+
 import org.elasticsearch.Version;
 import org.elasticsearch.common.Booleans;
 import org.elasticsearch.common.Strings;
@@ -33,6 +34,7 @@ import org.elasticsearch.common.xcontent.ToXContent;
 import org.elasticsearch.common.xcontent.XContentBuilder;

 import java.io.IOException;
+import java.net.InetAddress;
 import java.util.Map;

 import static org.elasticsearch.common.transport.TransportAddressSerializers.addressToStream;
@@ -136,7 +138,7 @@ public class DiscoveryNode implements Streamable, ToXContent {
      * @param version      the version of the node.
      */
     public DiscoveryNode(String nodeName, String nodeId, TransportAddress address, Map<String, String> attributes, Version version) {
-        this(nodeName, nodeId, NetworkUtils.getLocalHostName(""), NetworkUtils.getLocalHostAddress(""), address, attributes, version);
+        this(nodeName, nodeId, address.getHost(), address.getAddress(), address, attributes, version);
     }

     /**
diff --git a/core/src/main/java/org/elasticsearch/cluster/service/InternalClusterService.java b/core/src/main/java/org/elasticsearch/cluster/service/InternalClusterService.java
index 1fea2edec29..b992c3612ee 100644
--- a/core/src/main/java/org/elasticsearch/cluster/service/InternalClusterService.java
+++ b/core/src/main/java/org/elasticsearch/cluster/service/InternalClusterService.java
@@ -40,6 +40,8 @@ import org.elasticsearch.common.logging.ESLogger;
 import org.elasticsearch.common.logging.Loggers;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.common.text.StringText;
+import org.elasticsearch.common.transport.BoundTransportAddress;
+import org.elasticsearch.common.transport.TransportAddress;
 import org.elasticsearch.common.unit.TimeValue;
 import org.elasticsearch.common.util.concurrent.*;
 import org.elasticsearch.discovery.Discovery;
@@ -159,7 +161,8 @@ public class InternalClusterService extends AbstractLifecycleComponent<ClusterService> implements ClusterService {
         Map<String, String> nodeAttributes = discoveryNodeService.buildAttributes();
         // note, we rely on the fact that its a new id each time we start, see FD and "kill -9" handling
         final String nodeId = DiscoveryService.generateNodeId(settings);
-        DiscoveryNode localNode = new DiscoveryNode(settings.get("name"), nodeId, transportService.boundAddress().publishAddress(), nodeAttributes, version);
+        final TransportAddress publishAddress = transportService.boundAddress().publishAddress();
+        DiscoveryNode localNode = new DiscoveryNode(settings.get("name"), nodeId, publishAddress, nodeAttributes, version);
         DiscoveryNodes.Builder nodeBuilder = DiscoveryNodes.builder().put(localNode).localNodeId(localNode.id());
         this.clusterState = ClusterState.builder(clusterState).nodes(nodeBuilder).blocks(initialBlocks).build();
         this.transportService.setLocalNode(localNode);
diff --git a/core/src/main/java/org/elasticsearch/cluster/settings/Validator.java b/core/src/main/java/org/elasticsearch/cluster/settings/Validator.java
index 12049abed9b..cb253dceadf 100644
--- a/core/src/main/java/org/elasticsearch/cluster/settings/Validator.java
+++ b/core/src/main/java/org/elasticsearch/cluster/settings/Validator.java
@@ -22,6 +22,7 @@ package org.elasticsearch.cluster.settings;
 import org.elasticsearch.ElasticsearchParseException;
 import org.elasticsearch.cluster.ClusterState;
 import org.elasticsearch.common.Booleans;
+import org.elasticsearch.common.unit.ByteSizeValue;
 import org.elasticsearch.common.unit.TimeValue;

 import static org.elasticsearch.common.unit.ByteSizeValue.parseBytesSizeValue;
@@ -228,6 +229,21 @@ public interface Validator {
         }
     };

+    Validator POSITIVE_BYTES_SIZE = new Validator() {
+        @Override
+        public String validate(String setting, String value, ClusterState state) {
+            try {
+                ByteSizeValue byteSizeValue = parseBytesSizeValue(value, setting);
+                if (byteSizeValue.getBytes() <= 0) {
+                    return setting + " must be a positive byte size value";
+                }
+            } catch (ElasticsearchParseException ex) {
+                return ex.getMessage();
+            }
+            return null;
+        }
+    };
+
     Validator PERCENTAGE = new Validator() {
         @Override
         public String validate(String setting, String value, ClusterState clusterState) {
diff --git a/core/src/main/java/org/elasticsearch/common/geo/ShapesAvailability.java b/core/src/main/java/org/elasticsearch/common/geo/ShapesAvailability.java
index fce18337728..8af203f2ce8 100644
--- a/core/src/main/java/org/elasticsearch/common/geo/ShapesAvailability.java
+++ b/core/src/main/java/org/elasticsearch/common/geo/ShapesAvailability.java
@@ -19,8 +19,6 @@

 package org.elasticsearch.common.geo;

-import org.elasticsearch.common.Classes;
-
 /**
  */
 public class ShapesAvailability {
@@ -48,8 +46,5 @@ public class ShapesAvailability {
         JTS_AVAILABLE = xJTS_AVAILABLE;
     }

-
-    private ShapesAvailability() {
-
-    }
+    private ShapesAvailability() {}
 }
diff --git a/core/src/main/java/org/elasticsearch/common/inject/spi/ProviderLookup.java b/core/src/main/java/org/elasticsearch/common/inject/spi/ProviderLookup.java
index 44c35779def..06a732b192a 100644
--- a/core/src/main/java/org/elasticsearch/common/inject/spi/ProviderLookup.java
+++ b/core/src/main/java/org/elasticsearch/common/inject/spi/ProviderLookup.java
@@ -34,6 +34,31 @@ import static com.google.common.base.Preconditions.checkState;
  * @since 2.0
  */
 public final class ProviderLookup<T> implements Element {
+
+    // NOTE: this class is not part of guice and was added so the provider lookup's key can be accessible for tests
+    public static class ProviderImpl<T> implements Provider<T> {
+        private ProviderLookup<T> lookup;
+
+        private ProviderImpl(ProviderLookup<T> lookup) {
+            this.lookup = lookup;
+        }
+
+        @Override
+        public T get() {
+            checkState(lookup.delegate != null,
+                    "This Provider cannot be used until the Injector has been created.");
+            return lookup.delegate.get();
+        }
+
+        @Override
+        public String toString() {
+            return "Provider<" + lookup.key.getTypeLiteral() + ">";
+        }
+
+        public Key<T> getKey() {
+            return lookup.getKey();
+        }
+    }
     private final Object source;
     private final Key<T> key;
     private Provider<T> delegate;
@@ -86,18 +111,6 @@ public final class ProviderLookup<T> implements Element {
      * IllegalStateException} if you try to use it beforehand.
      */
     public Provider<T> getProvider() {
-        return new Provider<T>() {
-            @Override
-            public T get() {
-                checkState(delegate != null,
-                        "This Provider cannot be used until the Injector has been created.");
-                return delegate.get();
-            }
-
-            @Override
-            public String toString() {
-                return "Provider<" + key.getTypeLiteral() + ">";
-            }
-        };
+        return new ProviderImpl<>(this);
     }
 }
diff --git a/core/src/main/java/org/elasticsearch/common/logging/Loggers.java b/core/src/main/java/org/elasticsearch/common/logging/Loggers.java
index 8f546c93e89..de657c07be2 100644
--- a/core/src/main/java/org/elasticsearch/common/logging/Loggers.java
+++ b/core/src/main/java/org/elasticsearch/common/logging/Loggers.java
@@ -20,6 +20,7 @@
 package org.elasticsearch.common.logging;

 import com.google.common.collect.Lists;
+import org.apache.lucene.util.SuppressForbidden;
 import org.elasticsearch.common.Classes;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.index.Index;
@@ -74,20 +75,27 @@ public class Loggers {
         return getLogger(buildClassLoggerName(clazz), settings, prefixes);
     }

+    @SuppressForbidden(reason = "using localhost for logging on which host it is is fine")
+    private static InetAddress getHostAddress() {
+        try {
+            return InetAddress.getLocalHost();
+        } catch (UnknownHostException e) {
+            return null;
+        }
+    }
+
     public static ESLogger getLogger(String loggerName, Settings settings, String... prefixes) {
         List<String> prefixesList = newArrayList();
         if (settings.getAsBoolean("logger.logHostAddress", false)) {
-            try {
-                prefixesList.add(InetAddress.getLocalHost().getHostAddress());
-            } catch (UnknownHostException e) {
-                // ignore
+            final InetAddress addr = getHostAddress();
+            if (addr != null) {
+                prefixesList.add(addr.getHostAddress());
             }
         }
         if (settings.getAsBoolean("logger.logHostName", false)) {
-            try {
-                prefixesList.add(InetAddress.getLocalHost().getHostName());
-            } catch (UnknownHostException e) {
-                // ignore
+            final InetAddress addr = getHostAddress();
+            if (addr != null) {
+                prefixesList.add(addr.getHostName());
             }
         }
         String name = settings.get("name");
diff --git a/core/src/main/java/org/elasticsearch/common/network/NetworkService.java b/core/src/main/java/org/elasticsearch/common/network/NetworkService.java
index bd45987f05c..9f6b77aa90d 100644
--- a/core/src/main/java/org/elasticsearch/common/network/NetworkService.java
+++ b/core/src/main/java/org/elasticsearch/common/network/NetworkService.java
@@ -28,11 +28,8 @@ import org.elasticsearch.common.unit.TimeValue;

 import java.io.IOException;
 import java.net.InetAddress;
-import java.net.NetworkInterface;
 import java.net.UnknownHostException;
-import java.util.Collection;
 import java.util.List;
-import java.util.Locale;
 import java.util.concurrent.CopyOnWriteArrayList;
 import java.util.concurrent.TimeUnit;

@@ -41,7 +38,8 @@ import java.util.concurrent.TimeUnit;
  */
 public class NetworkService extends AbstractComponent {

-    public static final String LOCAL = "#local#";
+    /** By default, we bind to loopback interfaces */
+    public static final String DEFAULT_NETWORK_HOST = "_local_";

     private static final String GLOBAL_NETWORK_HOST_SETTING = "network.host";
     private static final String GLOBAL_NETWORK_BINDHOST_SETTING = "network.bind_host";
@@ -71,12 +69,12 @@ public class NetworkService extends AbstractComponent {
         /**
          * Resolves the default value if possible. If not, return null.
          */
-        InetAddress resolveDefault();
+        InetAddress[] resolveDefault();

         /**
          * Resolves a custom value handling, return null if can't handle it.
          */
-        InetAddress resolveIfPossible(String value);
+        InetAddress[] resolveIfPossible(String value);
     }

     private final List<CustomNameResolver> customNameResolvers = new CopyOnWriteArrayList<>();
@@ -94,100 +92,86 @@ public class NetworkService extends AbstractComponent {
         customNameResolvers.add(customNameResolver);
     }

-
-    public InetAddress resolveBindHostAddress(String bindHost) throws IOException {
-        return resolveBindHostAddress(bindHost, InetAddress.getLoopbackAddress().getHostAddress());
-    }
-
-    public InetAddress resolveBindHostAddress(String bindHost, String defaultValue2) throws IOException {
-        return resolveInetAddress(bindHost, settings.get(GLOBAL_NETWORK_BINDHOST_SETTING, settings.get(GLOBAL_NETWORK_HOST_SETTING)), defaultValue2);
-    }
-
-    public InetAddress resolvePublishHostAddress(String publishHost) throws IOException {
-        InetAddress address = resolvePublishHostAddress(publishHost,
-                InetAddress.getLoopbackAddress().getHostAddress());
-        // verify that its not a local address
-        if (address == null || address.isAnyLocalAddress()) {
-            address = NetworkUtils.getFirstNonLoopbackAddress(NetworkUtils.StackType.IPv4);
-            if (address == null) {
-                address = NetworkUtils.getFirstNonLoopbackAddress(NetworkUtils.getIpStackType());
-                if (address == null) {
-                    address = NetworkUtils.getLocalAddress();
-                    if (address == null) {
-                        return NetworkUtils.getLocalhost(NetworkUtils.StackType.IPv4);
-                    }
-                }
-            }
+    public InetAddress[] resolveBindHostAddress(String bindHost) throws IOException {
+        // first check settings
+        if (bindHost == null) {
+            bindHost = settings.get(GLOBAL_NETWORK_BINDHOST_SETTING, settings.get(GLOBAL_NETWORK_HOST_SETTING));
         }
-        return address;
-    }
-
-    public InetAddress resolvePublishHostAddress(String publishHost, String defaultValue2) throws IOException {
-        return resolveInetAddress(publishHost, settings.get(GLOBAL_NETWORK_PUBLISHHOST_SETTING, settings.get(GLOBAL_NETWORK_HOST_SETTING)), defaultValue2);
-    }
-
-    public InetAddress resolveInetAddress(String host, String defaultValue1, String defaultValue2) throws UnknownHostException, IOException {
-        if (host == null) {
-            host = defaultValue1;
-        }
-        if (host == null) {
-            host = defaultValue2;
-        }
-        if (host == null) {
+        // next check any registered custom resolvers
+        if (bindHost == null) {
             for (CustomNameResolver customNameResolver : customNameResolvers) {
-                InetAddress inetAddress = customNameResolver.resolveDefault();
-                if (inetAddress != null) {
-                    return inetAddress;
+                InetAddress addresses[] = customNameResolver.resolveDefault();
+                if (addresses != null) {
+                    return addresses;
                 }
             }
-            return null;
         }
-        String origHost = host;
+        // finally, fill with our default
+        if (bindHost == null) {
+            bindHost = DEFAULT_NETWORK_HOST;
+        }
+        return resolveInetAddress(bindHost);
+    }
+
+    // TODO: needs to be InetAddress[]
+    public InetAddress resolvePublishHostAddress(String publishHost) throws IOException {
+        // first check settings
+        if (publishHost == null) {
+            publishHost = settings.get(GLOBAL_NETWORK_PUBLISHHOST_SETTING, settings.get(GLOBAL_NETWORK_HOST_SETTING));
+        }
+        // next check any registered custom resolvers
+        if (publishHost == null) {
+            for (CustomNameResolver customNameResolver : customNameResolvers) {
+                InetAddress addresses[] = customNameResolver.resolveDefault();
+                if (addresses != null) {
+                    return addresses[0];
+                }
+            }
+        }
+        // finally, fill with our default
+        if (publishHost == null) {
+            publishHost = DEFAULT_NETWORK_HOST;
+        }
+        // TODO: allow publishing multiple addresses
+        return resolveInetAddress(publishHost)[0];
+    }
+
+    private InetAddress[] resolveInetAddress(String host) throws UnknownHostException, IOException {
         if ((host.startsWith("#") && host.endsWith("#")) || (host.startsWith("_") && host.endsWith("_"))) {
             host = host.substring(1, host.length() - 1);
-
+            // allow custom resolvers to have special names
             for (CustomNameResolver customNameResolver : customNameResolvers) {
-                InetAddress inetAddress = customNameResolver.resolveIfPossible(host);
-                if (inetAddress != null) {
-                    return inetAddress;
+                InetAddress addresses[] = customNameResolver.resolveIfPossible(host);
+                if (addresses != null) {
+                    return addresses;
                 }
             }
-
-            if (host.equals("local")) {
-                return NetworkUtils.getLocalAddress();
-            } else if (host.startsWith("non_loopback")) {
-                if (host.toLowerCase(Locale.ROOT).endsWith(":ipv4")) {
-                    return NetworkUtils.getFirstNonLoopbackAddress(NetworkUtils.StackType.IPv4);
-                } else if (host.toLowerCase(Locale.ROOT).endsWith(":ipv6")) {
-                    return NetworkUtils.getFirstNonLoopbackAddress(NetworkUtils.StackType.IPv6);
-                } else {
-                    return NetworkUtils.getFirstNonLoopbackAddress(NetworkUtils.getIpStackType());
-                }
-            } else {
-                NetworkUtils.StackType stackType = NetworkUtils.getIpStackType();
-                if (host.toLowerCase(Locale.ROOT).endsWith(":ipv4")) {
-                    stackType = NetworkUtils.StackType.IPv4;
-                    host = host.substring(0, host.length() - 5);
-                } else if (host.toLowerCase(Locale.ROOT).endsWith(":ipv6")) {
-                    stackType = NetworkUtils.StackType.IPv6;
-                    host = host.substring(0, host.length() - 5);
-                }
-                Collection<NetworkInterface> allInterfs = NetworkUtils.getAllAvailableInterfaces();
-                for (NetworkInterface ni : allInterfs) {
-                    if (!ni.isUp()) {
-                        continue;
+            switch (host) {
+                case "local":
+                    return NetworkUtils.getLoopbackAddresses();
+                case "local:ipv4":
+                    return NetworkUtils.filterIPV4(NetworkUtils.getLoopbackAddresses());
+                case "local:ipv6":
+                    return NetworkUtils.filterIPV6(NetworkUtils.getLoopbackAddresses());
+                case "non_loopback":
+                    return NetworkUtils.getFirstNonLoopbackAddresses();
+                case "non_loopback:ipv4":
+                    return NetworkUtils.filterIPV4(NetworkUtils.getFirstNonLoopbackAddresses());
+                case "non_loopback:ipv6":
+                    return NetworkUtils.filterIPV6(NetworkUtils.getFirstNonLoopbackAddresses());
+                default:
+                    /* an interface specification */
+                    if (host.endsWith(":ipv4")) {
+                        host = host.substring(0, host.length() - 5);
+                        return NetworkUtils.filterIPV4(NetworkUtils.getAddressesForInterface(host));
+                    } else if (host.endsWith(":ipv6")) {
+                        host = host.substring(0, host.length() - 5);
+                        return NetworkUtils.filterIPV6(NetworkUtils.getAddressesForInterface(host));
+                    } else {
+                        return NetworkUtils.getAddressesForInterface(host);
                     }
-                    if (host.equals(ni.getName()) || host.equals(ni.getDisplayName())) {
-                        if (ni.isLoopback()) {
-                            return NetworkUtils.getFirstAddress(ni, stackType);
-                        } else {
-                            return NetworkUtils.getFirstNonLoopbackAddress(ni, stackType);
-                        }
-                    }
-                }
             }
-            throw new IOException("Failed to find network interface for [" + origHost + "]");
         }
-        return InetAddress.getByName(host);
+        return NetworkUtils.getAllByName(host);
     }
 }
diff --git a/core/src/main/java/org/elasticsearch/common/network/NetworkUtils.java b/core/src/main/java/org/elasticsearch/common/network/NetworkUtils.java
index 67710f9da6b..39705e82905 100644
--- a/core/src/main/java/org/elasticsearch/common/network/NetworkUtils.java
+++ b/core/src/main/java/org/elasticsearch/common/network/NetworkUtils.java
@@ -19,303 +19,194 @@

 package org.elasticsearch.common.network;

-import com.google.common.collect.Lists;
 import org.apache.lucene.util.BytesRef;
-import org.apache.lucene.util.CollectionUtil;
 import org.apache.lucene.util.Constants;
 import org.elasticsearch.common.logging.ESLogger;
 import org.elasticsearch.common.logging.Loggers;

-import java.net.*;
-import java.util.*;
+import java.net.Inet4Address;
+import java.net.Inet6Address;
+import java.net.InetAddress;
+import java.net.NetworkInterface;
+import java.net.SocketException;
+import java.net.UnknownHostException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.List;

 /**
- *
+ * Utilities for network interfaces / addresses
  */
 public abstract class NetworkUtils {

+    /** no instantiation */
+    private NetworkUtils() {}
+
+    /**
+     * By default we bind to any addresses on an interface/name, unless restricted by :ipv4 etc.
+     * This property is unrelated to that, this is about what we *publish*. Today the code pretty much
+     * expects one address so this is used for the sort order.
+     * @deprecated transition mechanism only
+     */
+    @Deprecated
+    static final boolean PREFER_V6 = Boolean.parseBoolean(System.getProperty("java.net.preferIPv6Addresses", "false"));
+
+    /** Sorts an address by preference. This way code like publishing can just pick the first one */
+    static int sortKey(InetAddress address, boolean prefer_v6) {
+        int key = address.getAddress().length;
+        if (prefer_v6) {
+            key = -key;
+        }
+
+        if (address.isAnyLocalAddress()) {
+            key += 5;
+        }
+        if (address.isMulticastAddress()) {
+            key += 4;
+        }
+        if (address.isLoopbackAddress()) {
+            key += 3;
+        }
+        if (address.isLinkLocalAddress()) {
+            key += 2;
+        }
+        if (address.isSiteLocalAddress()) {
+            key += 1;
+        }
+
+        return key;
+    }
+
+    /**
+     * Sorts addresses by order of preference. This is used to pick the first one for publishing
+     * @deprecated remove this when multihoming is really correct
+     */
+    @Deprecated
+    private static void sortAddresses(List<InetAddress> list) {
+        Collections.sort(list, new Comparator<InetAddress>() {
+            @Override
+            public int compare(InetAddress left, InetAddress right) {
+                int cmp = Integer.compare(sortKey(left, PREFER_V6), sortKey(right, PREFER_V6));
+                if (cmp == 0) {
+                    cmp = new BytesRef(left.getAddress()).compareTo(new BytesRef(right.getAddress()));
+                }
+                return cmp;
+            }
+        });
+    }
+
     private final static ESLogger logger = Loggers.getLogger(NetworkUtils.class);

-    public static enum StackType {
-        IPv4, IPv6, Unknown
+    /** Return all interfaces (and subinterfaces) on the system */
+    static List<NetworkInterface> getInterfaces() throws SocketException {
+        List<NetworkInterface> all = new ArrayList<>();
+        addAllInterfaces(all, Collections.list(NetworkInterface.getNetworkInterfaces()));
+        Collections.sort(all, new Comparator<NetworkInterface>() {
+            @Override
+            public int compare(NetworkInterface left, NetworkInterface right) {
+                return Integer.compare(left.getIndex(), right.getIndex());
+            }
+        });
+        return all;
     }
-
-    public static final String IPv4_SETTING = "java.net.preferIPv4Stack";
-    public static final String IPv6_SETTING = "java.net.preferIPv6Addresses";
-
-    public static final String NON_LOOPBACK_ADDRESS = "non_loopback_address";
-
-    private final static InetAddress localAddress;
-
-    static {
-        InetAddress localAddressX;
-        try {
-            localAddressX = InetAddress.getLocalHost();
-        } catch (Throwable e) {
-            logger.warn("failed to resolve local host, fallback to loopback", e);
-            localAddressX = InetAddress.getLoopbackAddress();
+
+    /** Helper for getInterfaces, recursively adds subinterfaces to {@code target} */
+    private static void addAllInterfaces(List<NetworkInterface> target, List<NetworkInterface> level) {
+        if (!level.isEmpty()) {
+            target.addAll(level);
+            for (NetworkInterface intf : level) {
+                addAllInterfaces(target, Collections.list(intf.getSubInterfaces()));
+            }
         }
-        localAddress = localAddressX;
     }
-
+
+    /** Returns system default for SO_REUSEADDR */
     public static boolean defaultReuseAddress() {
         return Constants.WINDOWS ? false : true;
     }
-
-    public static boolean isIPv4() {
-        return System.getProperty("java.net.preferIPv4Stack") != null && System.getProperty("java.net.preferIPv4Stack").equals("true");
-    }
-
-    public static InetAddress getIPv4Localhost() throws UnknownHostException {
-        return getLocalhost(StackType.IPv4);
-    }
-
-    public static InetAddress getIPv6Localhost() throws UnknownHostException {
-        return getLocalhost(StackType.IPv6);
-    }
-
-    public static InetAddress getLocalAddress() {
-        return localAddress;
-    }
-
-    public static String getLocalHostName(String defaultHostName) {
-        if (localAddress == null) {
-            return defaultHostName;
-        }
-        String hostName = localAddress.getHostName();
-        if (hostName == null) {
-            return defaultHostName;
-        }
-        return hostName;
-    }
-
-    public static String getLocalHostAddress(String defaultHostAddress) {
-        if (localAddress == null) {
-            return defaultHostAddress;
-        }
-        String hostAddress = localAddress.getHostAddress();
-        if (hostAddress == null) {
-            return defaultHostAddress;
-        }
-        return hostAddress;
-    }
-
-    public static InetAddress getLocalhost(StackType ip_version) throws UnknownHostException {
-        if (ip_version == StackType.IPv4)
-            return InetAddress.getByName("127.0.0.1");
-        else
-            return InetAddress.getByName("::1");
-    }
-
-    /**
-     * Returns the first non-loopback address on any interface on the current host.
-     *
-     * @param ip_version Constraint on IP version of address to be returned, 4 or 6
-     */
-    public static InetAddress getFirstNonLoopbackAddress(StackType ip_version) throws SocketException {
-        InetAddress address;
+
+    /** Returns addresses for all loopback interfaces that are up. */
+    public static InetAddress[] getLoopbackAddresses() throws SocketException {
+        List<InetAddress> list = new ArrayList<>();
         for (NetworkInterface intf : getInterfaces()) {
-            try {
-                if (!intf.isUp() || intf.isLoopback())
-                    continue;
-            } catch (Exception e) {
-                // might happen when calling on a network interface that does not exists
-                continue;
-            }
-            address = getFirstNonLoopbackAddress(intf, ip_version);
-            if (address != null) {
-                return address;
+            if (intf.isLoopback() && intf.isUp()) {
+                list.addAll(Collections.list(intf.getInetAddresses()));
             }
         }
-
-        return null;
-    }
-
-    private static List<NetworkInterface> getInterfaces() throws SocketException {
-        Enumeration intfs = NetworkInterface.getNetworkInterfaces();
-
-        List<NetworkInterface> intfsList = Lists.newArrayList();
-        while (intfs.hasMoreElements()) {
-            intfsList.add((NetworkInterface) intfs.nextElement());
+        if (list.isEmpty()) {
+            throw new IllegalArgumentException("No up-and-running loopback interfaces found, got " + getInterfaces());
         }
-
-        sortInterfaces(intfsList);
-        return intfsList;
+        sortAddresses(list);
+        return list.toArray(new InetAddress[list.size()]);
     }
-
-    private static void sortInterfaces(List<NetworkInterface> intfsList) {
-        // order by index, assuming first ones are more interesting
-        CollectionUtil.timSort(intfsList, new Comparator<NetworkInterface>() {
-            @Override
-            public int compare(NetworkInterface o1, NetworkInterface o2) {
-                return Integer.compare(o1.getIndex(), o2.getIndex());
-            }
-        });
-    }
-
-
-    /**
-     * Returns the first non-loopback address on the given interface on the current host.
-     *
-     * @param intf      the interface to be checked
-     * @param ipVersion Constraint on IP version of address to be returned, 4 or 6
-     */
-    public static InetAddress getFirstNonLoopbackAddress(NetworkInterface intf, StackType ipVersion) throws SocketException {
-        if (intf == null)
-            throw new IllegalArgumentException("Network interface pointer is null");
-
-        for (Enumeration addresses = intf.getInetAddresses(); addresses.hasMoreElements(); ) {
-            InetAddress address = (InetAddress) addresses.nextElement();
-            if (!address.isLoopbackAddress()) {
-                if ((address instanceof Inet4Address && ipVersion == StackType.IPv4) ||
-                        (address instanceof Inet6Address && ipVersion == StackType.IPv6))
-                    return address;
+
+    /** Returns addresses for the first non-loopback interface that is up. */
+    public static InetAddress[] getFirstNonLoopbackAddresses() throws SocketException {
+        List<InetAddress> list = new ArrayList<>();
+        for (NetworkInterface intf : getInterfaces()) {
+            if (intf.isLoopback() == false && intf.isUp()) {
+                list.addAll(Collections.list(intf.getInetAddresses()));
+                break;
             }
         }
-        return null;
-    }
-
-    /**
-     * Returns the first address with the proper ipVersion on the given interface on the current host.
-     *
-     * @param intf      the interface to be checked
-     * @param ipVersion Constraint on IP version of address to be returned, 4 or 6
-     */
-    public static InetAddress getFirstAddress(NetworkInterface intf, StackType ipVersion) throws SocketException {
-        if (intf == null)
-            throw new IllegalArgumentException("Network interface pointer is null");
-
-        for (Enumeration addresses = intf.getInetAddresses(); addresses.hasMoreElements(); ) {
-            InetAddress address = (InetAddress) addresses.nextElement();
-            if ((address instanceof Inet4Address && ipVersion == StackType.IPv4) ||
-                    (address instanceof Inet6Address && ipVersion == StackType.IPv6))
-                return address;
+        if (list.isEmpty()) {
+            throw new IllegalArgumentException("No up-and-running non-loopback interfaces found, got " + getInterfaces());
         }
-        return null;
+        sortAddresses(list);
+        return list.toArray(new InetAddress[list.size()]);
     }
-
-    /**
-     * A function to check if an interface supports an IP version (i.e has addresses
-     * defined for that IP version).
-     *
-     * @param intf
-     * @return
-     */
-    public static boolean interfaceHasIPAddresses(NetworkInterface intf, StackType ipVersion) throws SocketException, UnknownHostException {
-        boolean supportsVersion = false;
-        if (intf != null) {
-            // get all the InetAddresses defined on the interface
-            Enumeration addresses = intf.getInetAddresses();
-            while (addresses != null && addresses.hasMoreElements()) {
-                // get the next InetAddress for the current interface
-                InetAddress address = (InetAddress) addresses.nextElement();
-
-                // check if we find an address of correct version
-                if ((address instanceof Inet4Address && (ipVersion == StackType.IPv4)) ||
-                        (address instanceof Inet6Address && (ipVersion == StackType.IPv6))) {
-                    supportsVersion = true;
-                    break;
-                }
-            }
-        } else {
-            throw new UnknownHostException("network interface not found");
+
+    /** Returns addresses for the given interface (it must be marked up) */
+    public static InetAddress[] getAddressesForInterface(String name) throws SocketException {
+        NetworkInterface intf = NetworkInterface.getByName(name);
+        if (intf == null) {
+            throw new IllegalArgumentException("No interface named '" + name + "' found, got " + getInterfaces());
         }
-        return supportsVersion;
+        if (!intf.isUp()) {
+            throw new IllegalArgumentException("Interface '" + name + "' is not up and running");
+        }
+        List<InetAddress> list = Collections.list(intf.getInetAddresses());
+        if (list.isEmpty()) {
+            throw new IllegalArgumentException("Interface '" + name + "' has no internet addresses");
+        }
+        sortAddresses(list);
+        return list.toArray(new InetAddress[list.size()]);
     }
-
-    /**
-     * Tries to determine the type of IP stack from the available interfaces and their addresses and from the
-     * system properties (java.net.preferIPv4Stack and java.net.preferIPv6Addresses)
-     *
-     * @return StackType.IPv4 for an IPv4 only stack, StackYTypeIPv6 for an IPv6 only stack, and StackType.Unknown
-     * if the type cannot be detected
-     */
-    public static StackType getIpStackType() {
-        boolean isIPv4StackAvailable = isStackAvailable(true);
-        boolean isIPv6StackAvailable = isStackAvailable(false);
-
-        // if only IPv4 stack available
-        if (isIPv4StackAvailable && !isIPv6StackAvailable) {
-            return StackType.IPv4;
-        }
-        // if only IPv6 stack available
-        else if (isIPv6StackAvailable && !isIPv4StackAvailable) {
-            return StackType.IPv6;
-        }
-        // if dual stack
-        else if (isIPv4StackAvailable && isIPv6StackAvailable) {
-            // get the System property which records user preference for a stack on a dual stack machine
-            if (Boolean.getBoolean(IPv4_SETTING)) // has preference over java.net.preferIPv6Addresses
-                return StackType.IPv4;
-            if (Boolean.getBoolean(IPv6_SETTING))
-                return StackType.IPv6;
-            return StackType.IPv6;
-        }
-        return StackType.Unknown;
+
+    /** Returns addresses for the given host, sorted by order of preference */
+    public static InetAddress[] getAllByName(String host) throws UnknownHostException {
+        InetAddress addresses[] = InetAddress.getAllByName(host);
+        sortAddresses(Arrays.asList(addresses));
+        return addresses;
     }
-
-
-    public static boolean isStackAvailable(boolean ipv4) {
-        Collection<InetAddress> allAddrs = getAllAvailableAddresses();
-        for (InetAddress addr : allAddrs)
-            if (ipv4 && addr instanceof Inet4Address || (!ipv4 && addr instanceof Inet6Address))
-                return true;
-        return false;
-    }
-
-
-    /**
-     * Returns all the available interfaces, including first level sub interfaces.
- */ - public static List getAllAvailableInterfaces() throws SocketException { - List allInterfaces = new ArrayList<>(); - for (Enumeration interfaces = NetworkInterface.getNetworkInterfaces(); interfaces.hasMoreElements(); ) { - NetworkInterface intf = interfaces.nextElement(); - allInterfaces.add(intf); - - Enumeration subInterfaces = intf.getSubInterfaces(); - if (subInterfaces != null && subInterfaces.hasMoreElements()) { - while (subInterfaces.hasMoreElements()) { - allInterfaces.add(subInterfaces.nextElement()); - } + + /** Returns only the IPV4 addresses in {@code addresses} */ + public static InetAddress[] filterIPV4(InetAddress addresses[]) { + List list = new ArrayList<>(); + for (InetAddress address : addresses) { + if (address instanceof Inet4Address) { + list.add(address); } } - sortInterfaces(allInterfaces); - return allInterfaces; - } - - public static Collection getAllAvailableAddresses() { - // we want consistent order here. - final Set retval = new TreeSet<>(new Comparator () { - BytesRef left = new BytesRef(); - BytesRef right = new BytesRef(); - @Override - public int compare(InetAddress o1, InetAddress o2) { - return set(left, o1).compareTo(set(right, o1)); - } - - private BytesRef set(BytesRef ref, InetAddress addr) { - ref.bytes = addr.getAddress(); - ref.offset = 0; - ref.length = ref.bytes.length; - return ref; - } - }); - try { - for (NetworkInterface intf : getInterfaces()) { - Enumeration addrs = intf.getInetAddresses(); - while (addrs.hasMoreElements()) - retval.add(addrs.nextElement()); - } - } catch (SocketException e) { - logger.warn("Failed to derive all available interfaces", e); + if (list.isEmpty()) { + throw new IllegalArgumentException("No ipv4 addresses found in " + Arrays.toString(addresses)); } - - return retval; + return list.toArray(new InetAddress[list.size()]); } - - - private NetworkUtils() { - + + /** Returns only the IPV6 addresses in {@code addresses} */ + public static InetAddress[] filterIPV6(InetAddress addresses[]) { + List list = new ArrayList<>(); + for (InetAddress address : addresses) { + if (address instanceof Inet6Address) { + list.add(address); + } + } + if (list.isEmpty()) { + throw new IllegalArgumentException("No ipv6 addresses found in " + Arrays.toString(addresses)); + } + return list.toArray(new InetAddress[list.size()]); } } diff --git a/core/src/main/java/org/elasticsearch/common/transport/DummyTransportAddress.java b/core/src/main/java/org/elasticsearch/common/transport/DummyTransportAddress.java index 47f089a1e14..74bcfecdc69 100644 --- a/core/src/main/java/org/elasticsearch/common/transport/DummyTransportAddress.java +++ b/core/src/main/java/org/elasticsearch/common/transport/DummyTransportAddress.java @@ -44,6 +44,21 @@ public class DummyTransportAddress implements TransportAddress { return other == INSTANCE; } + @Override + public String getHost() { + return "dummy"; + } + + @Override + public String getAddress() { + return "0.0.0.0"; // see https://en.wikipedia.org/wiki/0.0.0.0 + } + + @Override + public int getPort() { + return 42; + } + @Override public DummyTransportAddress readFrom(StreamInput in) throws IOException { return INSTANCE; diff --git a/core/src/main/java/org/elasticsearch/common/transport/InetSocketTransportAddress.java b/core/src/main/java/org/elasticsearch/common/transport/InetSocketTransportAddress.java index a13e24f3c3b..f4f686ff2e5 100644 --- a/core/src/main/java/org/elasticsearch/common/transport/InetSocketTransportAddress.java +++ 
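A rough usage sketch of the refactored helpers above, which replace the per-ipVersion lookups with array-returning, preference-sorted variants. Assumptions: the enclosing class is NetworkUtils (per the private constructor in the removed code), and "eth0" is just an example interface name.

import java.net.InetAddress;
import java.net.SocketException;

public class NetworkUtilsUsageSketch {
    public static void main(String[] args) throws SocketException {
        // all addresses of the first up, non-loopback interface, sorted by preference
        InetAddress[] candidates = NetworkUtils.getFirstNonLoopbackAddresses();

        // narrow to IPv4; throws IllegalArgumentException if none are present
        InetAddress[] v4 = NetworkUtils.filterIPV4(candidates);
        System.out.println(v4[0].getHostAddress());

        // all addresses of one named interface, which must be up
        InetAddress[] ifaceAddrs = NetworkUtils.getAddressesForInterface("eth0");
        System.out.println(ifaceAddrs.length);
    }
}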
diff --git a/core/src/main/java/org/elasticsearch/common/transport/DummyTransportAddress.java b/core/src/main/java/org/elasticsearch/common/transport/DummyTransportAddress.java
index 47f089a1e14..74bcfecdc69 100644
--- a/core/src/main/java/org/elasticsearch/common/transport/DummyTransportAddress.java
+++ b/core/src/main/java/org/elasticsearch/common/transport/DummyTransportAddress.java
@@ -44,6 +44,21 @@ public class DummyTransportAddress implements TransportAddress {
         return other == INSTANCE;
     }
 
+    @Override
+    public String getHost() {
+        return "dummy";
+    }
+
+    @Override
+    public String getAddress() {
+        return "0.0.0.0"; // see https://en.wikipedia.org/wiki/0.0.0.0
+    }
+
+    @Override
+    public int getPort() {
+        return 42;
+    }
+
     @Override
     public DummyTransportAddress readFrom(StreamInput in) throws IOException {
         return INSTANCE;
diff --git a/core/src/main/java/org/elasticsearch/common/transport/InetSocketTransportAddress.java b/core/src/main/java/org/elasticsearch/common/transport/InetSocketTransportAddress.java
index a13e24f3c3b..f4f686ff2e5 100644
--- a/core/src/main/java/org/elasticsearch/common/transport/InetSocketTransportAddress.java
+++ b/core/src/main/java/org/elasticsearch/common/transport/InetSocketTransportAddress.java
@@ -30,7 +30,7 @@ import java.net.InetSocketAddress;
 /**
  * A transport address used for IP socket address (wraps {@link java.net.InetSocketAddress}).
  */
-public class InetSocketTransportAddress implements TransportAddress {
+public final class InetSocketTransportAddress implements TransportAddress {
 
     private static boolean resolveAddress = false;
 
@@ -92,6 +92,21 @@ public class InetSocketTransportAddress implements TransportAddress {
                 address.getAddress().equals(((InetSocketTransportAddress) other).address.getAddress());
     }
 
+    @Override
+    public String getHost() {
+        return address.getHostName();
+    }
+
+    @Override
+    public String getAddress() {
+        return address.getAddress().getHostAddress();
+    }
+
+    @Override
+    public int getPort() {
+        return address.getPort();
+    }
+
     public InetSocketAddress address() {
         return this.address;
     }
diff --git a/core/src/main/java/org/elasticsearch/common/transport/LocalTransportAddress.java b/core/src/main/java/org/elasticsearch/common/transport/LocalTransportAddress.java
index 8935275e222..e3efa20af18 100644
--- a/core/src/main/java/org/elasticsearch/common/transport/LocalTransportAddress.java
+++ b/core/src/main/java/org/elasticsearch/common/transport/LocalTransportAddress.java
@@ -29,7 +29,7 @@ import java.io.IOException;
 /**
  *
  */
-public class LocalTransportAddress implements TransportAddress {
+public final class LocalTransportAddress implements TransportAddress {
 
     public static final LocalTransportAddress PROTO = new LocalTransportAddress("_na");
 
@@ -57,6 +57,21 @@ public class LocalTransportAddress implements TransportAddress {
         return other instanceof LocalTransportAddress && id.equals(((LocalTransportAddress) other).id);
     }
 
+    @Override
+    public String getHost() {
+        return "local";
+    }
+
+    @Override
+    public String getAddress() {
+        return "0.0.0.0"; // see https://en.wikipedia.org/wiki/0.0.0.0
+    }
+
+    @Override
+    public int getPort() {
+        return 0;
+    }
+
     @Override
     public LocalTransportAddress readFrom(StreamInput in) throws IOException {
         return new LocalTransportAddress(in);
diff --git a/core/src/main/java/org/elasticsearch/common/transport/TransportAddress.java b/core/src/main/java/org/elasticsearch/common/transport/TransportAddress.java
index c5051fadbe6..910b1fc6af2 100644
--- a/core/src/main/java/org/elasticsearch/common/transport/TransportAddress.java
+++ b/core/src/main/java/org/elasticsearch/common/transport/TransportAddress.java
@@ -28,7 +28,24 @@ import org.elasticsearch.common.io.stream.Writeable;
  */
 public interface TransportAddress extends Writeable {
 
+    /**
+     * Returns the host string for this transport address
+     */
+    String getHost();
+
+    /**
+     * Returns the address string for this transport address
+     */
+    String getAddress();
+
+    /**
+     * Returns the port of this transport address if applicable
+     */
+    int getPort();
+
     short uniqueAddressTypeId();
 
     boolean sameHost(TransportAddress other);
+
+    public String toString();
 }
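All three implementations above satisfy the new accessors on the TransportAddress interface, so callers can format any address generically. A sketch; the InetSocketTransportAddress constructor taking an InetSocketAddress is assumed from the wrapped type noted in the class javadoc, and the printed host name is illustrative.

import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.common.transport.TransportAddress;

import java.net.InetSocketAddress;

public class TransportAddressSketch {
    // format any TransportAddress for a log line without downcasting
    static String describe(TransportAddress address) {
        return address.getHost() + "/" + address.getAddress() + ":" + address.getPort();
    }

    public static void main(String[] args) {
        TransportAddress address =
                new InetSocketTransportAddress(new InetSocketAddress("127.0.0.1", 9300));
        System.out.println(describe(address)); // e.g. localhost/127.0.0.1:9300
    }
}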
diff --git a/core/src/main/java/org/elasticsearch/common/util/ExtensionPoint.java b/core/src/main/java/org/elasticsearch/common/util/ExtensionPoint.java
index 56163378327..4a5b3fc1a58 100644
--- a/core/src/main/java/org/elasticsearch/common/util/ExtensionPoint.java
+++ b/core/src/main/java/org/elasticsearch/common/util/ExtensionPoint.java
@@ -31,21 +31,18 @@ import java.util.*;
  * all extensions by a single name and ensures that extensions are not registered
  * more than once.
  */
-public abstract class ExtensionPoint<T> {
+public abstract class ExtensionPoint {
     protected final String name;
-    protected final Class<T> extensionClass;
     protected final Class<?>[] singletons;
 
     /**
      * Creates a new extension point
     *
     * @param name           the human readable underscore case name of the extension point. This is used in error messages etc.
-     * @param extensionClass the base class that should be extended
     * @param singletons     a list of singletons to bind with this extension point - these are bound in {@link #bind(Binder)}
     */
-    public ExtensionPoint(String name, Class<T> extensionClass, Class<?>... singletons) {
+    public ExtensionPoint(String name, Class<?>... singletons) {
         this.name = name;
-        this.extensionClass = extensionClass;
        this.singletons = singletons;
    }
 
@@ -62,29 +59,30 @@ public abstract class ExtensionPoint {
     }
 
     /**
-     * Subclasses can bind their type, map or set exentions here.
+     * Subclasses can bind their type, map or set extensions here.
     */
     protected abstract void bindExtensions(Binder binder);
 
     /**
      * A map based extension point which allows to register keyed implementations ie. parsers or some kind of strategies.
     */
-    public static class MapExtensionPoint<T> extends ExtensionPoint<T> {
+    public static class ClassMap<T> extends ExtensionPoint {
+        protected final Class<T> extensionClass;
         private final Map<String, Class<? extends T>> extensions = new HashMap<>();
         private final Set<String> reservedKeys;
 
         /**
-         * Creates a new {@link org.elasticsearch.common.util.ExtensionPoint.MapExtensionPoint}
+         * Creates a new {@link ClassMap}
         *
         * @param name           the human readable underscore case name of the extension poing. This is used in error messages etc.
         * @param extensionClass the base class that should be extended
         * @param singletons     a list of singletons to bind with this extension point - these are bound in {@link #bind(Binder)}
         * @param reservedKeys   a set of reserved keys by internal implementations
         */
-        public MapExtensionPoint(String name, Class<T> extensionClass, Set<String> reservedKeys, Class<?>... singletons) {
-            super(name, extensionClass, singletons);
+        public ClassMap(String name, Class<T> extensionClass, Set<String> reservedKeys, Class<?>... singletons) {
+            super(name, singletons);
+            this.extensionClass = extensionClass;
             this.reservedKeys = reservedKeys;
-
         }
 
         /**
@@ -118,13 +116,13 @@ public abstract class ExtensionPoint {
     }
 
     /**
-     * A Type extension point which basically allows to registerd keyed extensions like {@link org.elasticsearch.common.util.ExtensionPoint.MapExtensionPoint}
+     * A Type extension point which basically allows to register keyed extensions like {@link ClassMap}
      * but doesn't instantiate and bind all the registered key value pairs but instead replace a singleton based on a given setting via {@link #bindType(Binder, Settings, String, String)}
     * Note: {@link #bind(Binder)} is not supported by this class
     */
-    public static final class TypeExtensionPoint<T> extends MapExtensionPoint<T> {
+    public static final class SelectedType<T> extends ClassMap<T> {
 
-        public TypeExtensionPoint(String name, Class<T> extensionClass) {
+        public SelectedType(String name, Class<T> extensionClass) {
             super(name, extensionClass, Collections.EMPTY_SET);
         }
 
@@ -156,18 +154,20 @@ public abstract class ExtensionPoint {
 
     /**
      * A set based extension point which allows to register extended classes that might be used to chain additional functionality etc.
     */
-    public final static class SetExtensionPoint<T> extends ExtensionPoint<T> {
+    public final static class ClassSet<T> extends ExtensionPoint {
+        protected final Class<T> extensionClass;
         private final Set<Class<? extends T>> extensions = new HashSet<>();
 
         /**
-         * Creates a new {@link org.elasticsearch.common.util.ExtensionPoint.SetExtensionPoint}
+         * Creates a new {@link ClassSet}
         *
         * @param name           the human readable underscore case name of the extension poing. This is used in error messages etc.
         * @param extensionClass the base class that should be extended
         * @param singletons     a list of singletons to bind with this extension point - these are bound in {@link #bind(Binder)}
         */
-        public SetExtensionPoint(String name, Class<T> extensionClass, Class<?>... singletons) {
-            super(name, extensionClass, singletons);
+        public ClassSet(String name, Class<T> extensionClass, Class<?>... singletons) {
+            super(name, singletons);
+            this.extensionClass = extensionClass;
         }
 
         /**
@@ -191,4 +191,46 @@ public abstract class ExtensionPoint {
             }
         }
     }
+
+    /**
+     * An instance of a map, mapping one instance value to another. Both key and value are instances, not classes
+     * like with other extension points.
+     */
+    public final static class InstanceMap<K, V> extends ExtensionPoint {
+        private final Map<K, V> map = new HashMap<>();
+        private final Class<K> keyType;
+        private final Class<V> valueType;
+
+        /**
+         * Creates a new {@link InstanceMap}
+         *
+         * @param name       the human readable underscore case name of the extension point. This is used in error messages.
+         * @param singletons a list of singletons to bind with this extension point - these are bound in {@link #bind(Binder)}
+         */
+        public InstanceMap(String name, Class<K> keyType, Class<V> valueType, Class<?>... singletons) {
+            super(name, singletons);
+            this.keyType = keyType;
+            this.valueType = valueType;
+        }
+
+        /**
+         * Registers a mapping from {@code key} to {@code value}
+         *
+         * @throws IllegalArgumentException iff the key is already registered
+         */
+        public final void registerExtension(K key, V value) {
+            V old = map.put(key, value);
+            if (old != null) {
+                throw new IllegalArgumentException("Cannot register [" + this.name + "] with key [" + key + "] to [" + value + "], already registered to [" + old + "]");
+            }
+        }
+
+        @Override
+        protected void bindExtensions(Binder binder) {
+            MapBinder<K, V> mapBinder = MapBinder.newMapBinder(binder, keyType, valueType);
+            for (Map.Entry<K, V> entry : map.entrySet()) {
+                mapBinder.addBinding(entry.getKey()).toInstance(entry.getValue());
+            }
+        }
+    }
 }
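To illustrate the renamed extension points, a sketch under assumptions: Greeter and its implementation are hypothetical stand-ins for real extension classes, and registerExtension(String, Class) is inherited by SelectedType from ClassMap, as the IndexCacheModule change further below uses it.

import org.elasticsearch.common.util.ExtensionPoint;

public class ExtensionPointSketch {
    interface Greeter {}                               // hypothetical extension type
    static class EnglishGreeter implements Greeter {}  // hypothetical implementation

    public static void main(String[] args) {
        // SelectedType: several classes may be registered by name; one is later
        // chosen through a setting via bindType (see the javadoc above)
        ExtensionPoint.SelectedType<Greeter> greeters =
                new ExtensionPoint.SelectedType<>("greeter", Greeter.class);
        greeters.registerExtension("english", EnglishGreeter.class);

        // InstanceMap: keys and values are instances rather than classes;
        // registering the same key twice throws IllegalArgumentException
        ExtensionPoint.InstanceMap<String, Integer> ports =
                new ExtensionPoint.InstanceMap<>("ports", String.class, Integer.class);
        ports.registerExtension("http", 9200);
        ports.registerExtension("transport", 9300);
    }
}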
diff --git a/core/src/main/java/org/elasticsearch/discovery/zen/ping/multicast/MulticastZenPing.java b/core/src/main/java/org/elasticsearch/discovery/zen/ping/multicast/MulticastZenPing.java
index 97f872c3108..26e3a6ded76 100644
--- a/core/src/main/java/org/elasticsearch/discovery/zen/ping/multicast/MulticastZenPing.java
+++ b/core/src/main/java/org/elasticsearch/discovery/zen/ping/multicast/MulticastZenPing.java
@@ -131,7 +131,9 @@ public class MulticastZenPing extends AbstractLifecycleComponent<ZenPing> implements ZenPing {
             boolean deferToInterface = settings.getAsBoolean("discovery.zen.ping.multicast.defer_group_to_set_interface", Constants.MAC_OS_X);
             multicastChannel = MulticastChannel.getChannel(nodeName(), shared,
                     new MulticastChannel.Config(port, group, bufferSize, ttl,
-                            networkService.resolvePublishHostAddress(address),
+                            // don't use publish address, the use case for that is e.g. a firewall or proxy and
+                            // may not even be bound to an interface on this machine! use the first bound address.
+                            networkService.resolveBindHostAddress(address)[0],
                             deferToInterface),
                     new Receiver());
         } catch (Throwable t) {
diff --git a/core/src/main/java/org/elasticsearch/http/netty/NettyHttpServerTransport.java b/core/src/main/java/org/elasticsearch/http/netty/NettyHttpServerTransport.java
index a89c0209cd3..664f7a8d0e4 100644
--- a/core/src/main/java/org/elasticsearch/http/netty/NettyHttpServerTransport.java
+++ b/core/src/main/java/org/elasticsearch/http/netty/NettyHttpServerTransport.java
@@ -51,6 +51,10 @@ import org.jboss.netty.handler.timeout.ReadTimeoutException;
 import java.io.IOException;
 import java.net.InetAddress;
 import java.net.InetSocketAddress;
+import java.net.SocketAddress;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
 import java.util.concurrent.Executors;
 import java.util.concurrent.atomic.AtomicReference;
 
@@ -128,7 +132,7 @@ public class NettyHttpServerTransport extends AbstractLifecycleComponent<HttpServerTransport>
 
-    protected volatile Channel serverChannel;
+    protected volatile List<Channel> serverChannels = new ArrayList<>();
 
     protected OpenChannelsHandler serverOpenChannels;
 
@@ -243,33 +247,18 @@ public class NettyHttpServerTransport extends AbstractLifecycleComponent<HttpServerTransport>
-        final AtomicReference<Exception> lastException = new AtomicReference<>();
-        boolean success = portsRange.iterate(new PortsRange.PortCallback() {
-            @Override
-            public boolean onPortNumber(int portNumber) {
-                try {
-                    serverChannel = serverBootstrap.bind(new InetSocketAddress(hostAddress, portNumber));
-                } catch (Exception e) {
-                    lastException.set(e);
-                    return false;
-                }
-                return true;
-            }
-        });
-        if (!success) {
-            throw new BindHttpException("Failed to bind to [" + port + "]", lastException.get());
+
+        for (InetAddress address : hostAddresses) {
+            bindAddress(address);
         }
-        InetSocketAddress boundAddress = (InetSocketAddress) serverChannel.getLocalAddress();
+        InetSocketAddress boundAddress = (InetSocketAddress) serverChannels.get(0).getLocalAddress();
         InetSocketAddress publishAddress;
         if (0 == publishPort) {
             publishPort = boundAddress.getPort();
@@ -281,12 +270,42 @@ public class NettyHttpServerTransport extends AbstractLifecycleComponent<HttpServerTransport>
+    }
+
+    private void bindAddress(final InetAddress hostAddress) {
+        PortsRange portsRange = new PortsRange(port);
+        final AtomicReference<Exception> lastException = new AtomicReference<>();
+        final AtomicReference<SocketAddress> boundSocket = new AtomicReference<>();
+        boolean success = portsRange.iterate(new PortsRange.PortCallback() {
+            @Override
+            public boolean onPortNumber(int portNumber) {
+                try {
+                    synchronized (serverChannels) {
+                        Channel channel = serverBootstrap.bind(new InetSocketAddress(hostAddress, portNumber));
+                        serverChannels.add(channel);
+                        boundSocket.set(channel.getLocalAddress());
+                    }
+                } catch (Exception e) {
+                    lastException.set(e);
+                    return false;
+                }
+                return true;
+            }
+        });
+        if (!success) {
+            throw new BindHttpException("Failed to bind to [" + port + "]", lastException.get());
+        }
+        logger.info("Bound http to address [{}]", boundSocket.get());
+    }
 
     @Override
     protected void doStop() {
-        if (serverChannel != null) {
-            serverChannel.close().awaitUninterruptibly();
-            serverChannel = null;
+        synchronized (serverChannels) {
+            if (serverChannels != null) {
+                for (Channel channel : serverChannels) {
+                    channel.close().awaitUninterruptibly();
+                }
+                serverChannels = null;
+            }
         }
 
         if (serverOpenChannels != null) {
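The doStart/bindAddress split above keeps one bound channel per resolved address, retrying the port range independently for each address. A distilled sketch of that strategy in plain java.net (not the Netty code above, just the shape of the loop):

import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.util.ArrayList;
import java.util.List;

public class MultiBindSketch {
    static List<ServerSocket> bindAll(InetAddress[] addresses, int[] ports) throws IOException {
        List<ServerSocket> sockets = new ArrayList<>();
        for (InetAddress address : addresses) {
            IOException last = null;
            for (int port : ports) {
                try {
                    sockets.add(new ServerSocket(port, 50, address));
                    last = null;
                    break; // this address is bound, move on to the next one
                } catch (IOException e) {
                    last = e; // port taken, try the next port for this address
                }
            }
            if (last != null) {
                throw last; // no port in the range worked for this address
            }
        }
        return sockets;
    }
}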
diff --git a/core/src/main/java/org/elasticsearch/index/cache/IndexCacheModule.java b/core/src/main/java/org/elasticsearch/index/cache/IndexCacheModule.java
index fc0b4d08e1f..86e20490fa1 100644
--- a/core/src/main/java/org/elasticsearch/index/cache/IndexCacheModule.java
+++ b/core/src/main/java/org/elasticsearch/index/cache/IndexCacheModule.java
@@ -20,8 +20,8 @@ package org.elasticsearch.index.cache;
 
 import org.elasticsearch.common.inject.AbstractModule;
-import org.elasticsearch.common.inject.Scopes;
 import org.elasticsearch.common.settings.Settings;
+import org.elasticsearch.common.util.ExtensionPoint;
 import org.elasticsearch.index.cache.bitset.BitsetFilterCache;
 import org.elasticsearch.index.cache.query.QueryCache;
 import org.elasticsearch.index.cache.query.index.IndexQueryCache;
@@ -35,24 +35,24 @@ public class IndexCacheModule extends AbstractModule {
     // for test purposes only
     public static final String QUERY_CACHE_EVERYTHING = "index.queries.cache.everything";
 
-    private final Settings settings;
+    private final Settings indexSettings;
+    private final ExtensionPoint.SelectedType<QueryCache> queryCaches;
 
     public IndexCacheModule(Settings settings) {
-        this.settings = settings;
+        this.indexSettings = settings;
+        this.queryCaches = new ExtensionPoint.SelectedType<>("query_cache", QueryCache.class);
+
+        registerQueryCache(INDEX_QUERY_CACHE, IndexQueryCache.class);
+        registerQueryCache(NONE_QUERY_CACHE, NoneQueryCache.class);
+    }
+
+    public void registerQueryCache(String name, Class<? extends QueryCache> clazz) {
+        queryCaches.registerExtension(name, clazz);
     }
 
     @Override
     protected void configure() {
-        String queryCacheType = settings.get(QUERY_CACHE_TYPE, INDEX_QUERY_CACHE);
-        Class<? extends QueryCache> queryCacheImpl;
-        if (queryCacheType.equals(INDEX_QUERY_CACHE)) {
-            queryCacheImpl = IndexQueryCache.class;
-        } else if (queryCacheType.equals(NONE_QUERY_CACHE)) {
-            queryCacheImpl = NoneQueryCache.class;
-        } else {
-            throw new IllegalArgumentException("Unknown QueryCache type [" + queryCacheType + "]");
-        }
-        bind(QueryCache.class).to(queryCacheImpl).in(Scopes.SINGLETON);
+        queryCaches.bindType(binder(), indexSettings, QUERY_CACHE_TYPE, INDEX_QUERY_CACHE);
         bind(BitsetFilterCache.class).asEagerSingleton();
         bind(IndexCache.class).asEagerSingleton();
     }
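With the query cache behind an ExtensionPoint.SelectedType, implementations can now be registered by name and picked through the QUERY_CACHE_TYPE setting instead of the hard-coded if/else chain. A sketch under assumptions: the onModule hook name follows the usual plugin convention, and the NoneQueryCache import path is assumed.

import org.elasticsearch.index.cache.IndexCacheModule;
import org.elasticsearch.index.cache.query.none.NoneQueryCache;

public class QueryCacheRegistrationSketch {
    public void onModule(IndexCacheModule module) {
        // register an existing implementation under an extra name, just to show
        // the call shape; a real plugin would pass its own QueryCache subclass
        module.registerQueryCache("none_alias", NoneQueryCache.class);
    }
}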
diff --git a/core/src/main/java/org/elasticsearch/index/engine/IndexSearcherWrapper.java b/core/src/main/java/org/elasticsearch/index/engine/IndexSearcherWrapper.java
index c8a75f447b7..665d17a2f86 100644
--- a/core/src/main/java/org/elasticsearch/index/engine/IndexSearcherWrapper.java
+++ b/core/src/main/java/org/elasticsearch/index/engine/IndexSearcherWrapper.java
@@ -36,10 +36,12 @@ public interface IndexSearcherWrapper {
     DirectoryReader wrap(DirectoryReader reader);
 
     /**
-     * @param searcher The provided index searcher to be wrapped to add custom functionality
+     * @param engineConfig The engine config which can be used to get the query cache and query cache policy from
+     *                     when creating a new index searcher
+     * @param searcher     The provided index searcher to be wrapped to add custom functionality
     * @return a new index searcher wrapping the provided index searcher or if no wrapping was performed
     * the provided index searcher
     */
-    IndexSearcher wrap(IndexSearcher searcher) throws EngineException;
+    IndexSearcher wrap(EngineConfig engineConfig, IndexSearcher searcher) throws EngineException;
 
 }
diff --git a/core/src/main/java/org/elasticsearch/index/engine/IndexSearcherWrappingService.java b/core/src/main/java/org/elasticsearch/index/engine/IndexSearcherWrappingService.java
index a0ea90e024e..23d05f01dc7 100644
--- a/core/src/main/java/org/elasticsearch/index/engine/IndexSearcherWrappingService.java
+++ b/core/src/main/java/org/elasticsearch/index/engine/IndexSearcherWrappingService.java
@@ -77,7 +77,7 @@ public final class IndexSearcherWrappingService {
         // TODO: Right now IndexSearcher isn't wrapper friendly, when it becomes wrapper friendly we should revise this extension point
         // For example if IndexSearcher#rewrite() is overwritten than also IndexSearcher#createNormalizedWeight needs to be overwritten
         // This needs to be fixed before we can allow the IndexSearcher from Engine to be wrapped multiple times
-        IndexSearcher indexSearcher = wrapper.wrap(innerIndexSearcher);
+        IndexSearcher indexSearcher = wrapper.wrap(engineConfig, innerIndexSearcher);
         if (reader == engineSearcher.reader() && indexSearcher == innerIndexSearcher) {
             return engineSearcher;
         } else {
diff --git a/core/src/main/java/org/elasticsearch/index/mapper/DocumentMapperParser.java b/core/src/main/java/org/elasticsearch/index/mapper/DocumentMapperParser.java
index 5c6a10635ab..33281dc86f4 100644
--- a/core/src/main/java/org/elasticsearch/index/mapper/DocumentMapperParser.java
+++ b/core/src/main/java/org/elasticsearch/index/mapper/DocumentMapperParser.java
@@ -101,8 +101,7 @@ public class DocumentMapperParser {
                 .put(ObjectMapper.NESTED_CONTENT_TYPE, new ObjectMapper.TypeParser())
                 .put(TypeParsers.MULTI_FIELD_CONTENT_TYPE, TypeParsers.multiFieldConverterTypeParser)
                 .put(CompletionFieldMapper.CONTENT_TYPE, new CompletionFieldMapper.TypeParser())
-                .put(GeoPointFieldMapper.CONTENT_TYPE, new GeoPointFieldMapper.TypeParser())
-                .put(Murmur3FieldMapper.CONTENT_TYPE, new Murmur3FieldMapper.TypeParser());
+                .put(GeoPointFieldMapper.CONTENT_TYPE, new GeoPointFieldMapper.TypeParser());
 
         if (ShapesAvailability.JTS_AVAILABLE) {
             typeParsersBuilder.put(GeoShapeFieldMapper.CONTENT_TYPE, new GeoShapeFieldMapper.TypeParser());
diff --git a/core/src/main/java/org/elasticsearch/index/mapper/MapperBuilders.java b/core/src/main/java/org/elasticsearch/index/mapper/MapperBuilders.java
index 73c92ded424..41b657a73ca 100644
--- a/core/src/main/java/org/elasticsearch/index/mapper/MapperBuilders.java
+++ b/core/src/main/java/org/elasticsearch/index/mapper/MapperBuilders.java
@@ -84,10 +84,6 @@ public final class MapperBuilders {
         return new LongFieldMapper.Builder(name);
     }
 
-    public static Murmur3FieldMapper.Builder murmur3Field(String name) {
-        return new Murmur3FieldMapper.Builder(name);
-    }
-
     public static FloatFieldMapper.Builder floatField(String name) {
         return new FloatFieldMapper.Builder(name);
     }
diff --git a/core/src/main/java/org/elasticsearch/index/mapper/geo/GeoPointFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/geo/GeoPointFieldMapper.java
index 0458c410b5b..8addceff7e0 100644
--- a/core/src/main/java/org/elasticsearch/index/mapper/geo/GeoPointFieldMapper.java
+++ b/core/src/main/java/org/elasticsearch/index/mapper/geo/GeoPointFieldMapper.java
@@ -85,6 +85,8 @@ public class GeoPointFieldMapper extends FieldMapper implements ArrayValueMapperParser {
         public static final String LON_SUFFIX = "." + LON;
         public static final String GEOHASH = "geohash";
         public static final String GEOHASH_SUFFIX = "." + GEOHASH;
+        public static final String IGNORE_MALFORMED = "ignore_malformed";
+        public static final String COERCE = "coerce";
     }
 
     public static class Defaults {
@@ -93,10 +95,9 @@ public class GeoPointFieldMapper extends FieldMapper implements ArrayValueMapperParser {
         public static final boolean ENABLE_GEOHASH = false;
         public static final boolean ENABLE_GEOHASH_PREFIX = false;
         public static final int GEO_HASH_PRECISION = GeoHashUtils.PRECISION;
-        public static final boolean NORMALIZE_LAT = true;
-        public static final boolean NORMALIZE_LON = true;
-        public static final boolean VALIDATE_LAT = true;
-        public static final boolean VALIDATE_LON = true;
+
+        public static final boolean IGNORE_MALFORMED = false;
+        public static final boolean COERCE = false;
 
         public static final MappedFieldType FIELD_TYPE = new GeoPointFieldType();
 
@@ -215,6 +216,7 @@ public class GeoPointFieldMapper extends FieldMapper implements ArrayValueMapperParser {
         @Override
         public Mapper.Builder<?, ?> parse(String name, Map<String, Object> node, ParserContext parserContext) throws MapperParsingException {
             Builder builder = geoPointField(name);
+            final boolean indexCreatedBeforeV2_0 = parserContext.indexVersionCreated().before(Version.V_2_0_0);
             parseField(builder, name, node, parserContext);
             for (Iterator<Map.Entry<String, Object>> iterator = node.entrySet().iterator(); iterator.hasNext();) {
                 Map.Entry<String, Object> entry = iterator.next();
@@ -245,25 +247,42 @@ public class GeoPointFieldMapper extends FieldMapper implements ArrayValueMapperParser {
                     builder.geoHashPrecision(GeoUtils.geoHashLevelsForPrecision(fieldNode.toString()));
                 }
                 iterator.remove();
-            } else if (fieldName.equals("validate")) {
-                builder.fieldType().setValidateLat(XContentMapValues.nodeBooleanValue(fieldNode));
-                builder.fieldType().setValidateLon(XContentMapValues.nodeBooleanValue(fieldNode));
+            } else if (fieldName.equals(Names.IGNORE_MALFORMED)) {
+                if (builder.fieldType().coerce == false) {
+                    builder.fieldType().ignoreMalformed = XContentMapValues.nodeBooleanValue(fieldNode);
+                }
                 iterator.remove();
-            } else if (fieldName.equals("validate_lon")) {
-                builder.fieldType().setValidateLon(XContentMapValues.nodeBooleanValue(fieldNode));
+            } else if (indexCreatedBeforeV2_0 && fieldName.equals("validate")) {
+                if (builder.fieldType().ignoreMalformed == false) {
+                    builder.fieldType().ignoreMalformed = !XContentMapValues.nodeBooleanValue(fieldNode);
+                }
+                iterator.remove();
+            } else if (indexCreatedBeforeV2_0 && fieldName.equals("validate_lon")) {
+                if (builder.fieldType().ignoreMalformed() == false) {
+                    builder.fieldType().ignoreMalformed = !XContentMapValues.nodeBooleanValue(fieldNode);
+                }
                 iterator.remove();
-            } else if (fieldName.equals("validate_lat")) {
-                builder.fieldType().setValidateLat(XContentMapValues.nodeBooleanValue(fieldNode));
+            } else if (indexCreatedBeforeV2_0 && fieldName.equals("validate_lat")) {
+                if (builder.fieldType().ignoreMalformed == false) {
+                    builder.fieldType().ignoreMalformed = !XContentMapValues.nodeBooleanValue(fieldNode);
+                }
                 iterator.remove();
-            } else if (fieldName.equals("normalize")) {
-                builder.fieldType().setNormalizeLat(XContentMapValues.nodeBooleanValue(fieldNode));
-                builder.fieldType().setNormalizeLon(XContentMapValues.nodeBooleanValue(fieldNode));
+            } else if (fieldName.equals(Names.COERCE)) {
+                builder.fieldType().coerce = XContentMapValues.nodeBooleanValue(fieldNode);
+                if (builder.fieldType().coerce == true) {
+                    builder.fieldType().ignoreMalformed = true;
+                }
                 iterator.remove();
-            } else if (fieldName.equals("normalize_lat")) {
-                builder.fieldType().setNormalizeLat(XContentMapValues.nodeBooleanValue(fieldNode));
+            } else if (indexCreatedBeforeV2_0 && fieldName.equals("normalize")) {
+                builder.fieldType().coerce = XContentMapValues.nodeBooleanValue(fieldNode);
                 iterator.remove();
-            } else if (fieldName.equals("normalize_lon")) {
-                builder.fieldType().setNormalizeLon(XContentMapValues.nodeBooleanValue(fieldNode));
+            } else if (indexCreatedBeforeV2_0 && fieldName.equals("normalize_lat")) {
+                builder.fieldType().coerce = XContentMapValues.nodeBooleanValue(fieldNode);
+                iterator.remove();
+            } else if (indexCreatedBeforeV2_0 && fieldName.equals("normalize_lon")) {
+                if (builder.fieldType().coerce == false) {
+                    builder.fieldType().coerce = XContentMapValues.nodeBooleanValue(fieldNode);
+                }
                 iterator.remove();
             } else if (parseMultiField(builder, name, parserContext, fieldName, fieldNode)) {
                 iterator.remove();
@@ -281,10 +300,8 @@ public class GeoPointFieldMapper extends FieldMapper implements ArrayValueMapperParser {
         private MappedFieldType latFieldType;
         private MappedFieldType lonFieldType;
-        private boolean validateLon = true;
-        private boolean validateLat = true;
-        private boolean normalizeLon = true;
-        private boolean normalizeLat = true;
+        private boolean ignoreMalformed = false;
+        private boolean coerce = false;
 
         public GeoPointFieldType() {}
 
@@ -295,10 +312,8 @@ public class GeoPointFieldMapper extends FieldMapper implements ArrayValueMapperParser {
             this.geohashPrefixEnabled = ref.geohashPrefixEnabled;
             this.latFieldType = ref.latFieldType; // copying ref is ok, this can never be modified
             this.lonFieldType = ref.lonFieldType; // copying ref is ok, this can never be modified
-            this.validateLon = ref.validateLon;
-            this.validateLat = ref.validateLat;
-            this.normalizeLon = ref.normalizeLon;
-            this.normalizeLat = ref.normalizeLat;
+            this.coerce = ref.coerce;
+            this.ignoreMalformed = ref.ignoreMalformed;
         }
 
         @Override
@@ -312,10 +327,8 @@ public class GeoPointFieldMapper extends FieldMapper implements ArrayValueMapperParser {
             GeoPointFieldType that = (GeoPointFieldType) o;
             return geohashPrecision == that.geohashPrecision &&
                     geohashPrefixEnabled == that.geohashPrefixEnabled &&
-                    validateLon == that.validateLon &&
-                    validateLat == that.validateLat &&
-                    normalizeLon == that.normalizeLon &&
-                    normalizeLat == that.normalizeLat &&
+                    coerce == that.coerce &&
+                    ignoreMalformed == that.ignoreMalformed &&
                     java.util.Objects.equals(geohashFieldType, that.geohashFieldType) &&
                     java.util.Objects.equals(latFieldType, that.latFieldType) &&
                     java.util.Objects.equals(lonFieldType, that.lonFieldType);
@@ -323,7 +336,8 @@ public class GeoPointFieldMapper extends FieldMapper implements ArrayValueMapperParser {
 
         @Override
         public int hashCode() {
-            return java.util.Objects.hash(super.hashCode(), geohashFieldType, geohashPrecision, geohashPrefixEnabled, latFieldType, lonFieldType, validateLon, validateLat, normalizeLon, normalizeLat);
+            return java.util.Objects.hash(super.hashCode(), geohashFieldType, geohashPrecision, geohashPrefixEnabled, latFieldType,
+                    lonFieldType, coerce, ignoreMalformed);
         }
 
         @Override
@@ -347,22 +361,10 @@ public class GeoPointFieldMapper extends FieldMapper implements ArrayValueMapperParser {
             if (isGeohashPrefixEnabled() != other.isGeohashPrefixEnabled()) {
                 conflicts.add("mapper [" + names().fullName() + "] has different geohash_prefix");
             }
-            if (normalizeLat() != other.normalizeLat()) {
-                conflicts.add("mapper [" + names().fullName() + "] has different normalize_lat");
-            }
-            if (normalizeLon() != other.normalizeLon()) {
-                conflicts.add("mapper [" + names().fullName() + "] has different normalize_lon");
-            }
-            if (isLatLonEnabled() &&
+            if (isLatLonEnabled() && other.isLatLonEnabled() &&
                     latFieldType().numericPrecisionStep() != other.latFieldType().numericPrecisionStep()) {
                 conflicts.add("mapper [" + names().fullName() + "] has different precision_step");
             }
-            if (validateLat() != other.validateLat()) {
-                conflicts.add("mapper [" + names().fullName() + "] has different validate_lat");
-            }
-            if (validateLon() != other.validateLon()) {
-                conflicts.add("mapper [" + names().fullName() + "] has different validate_lon");
-            }
         }
 
         public boolean isGeohashEnabled() {
@@ -406,40 +408,22 @@ public class GeoPointFieldMapper extends FieldMapper implements ArrayValueMapperParser {
             this.lonFieldType = lonFieldType;
         }
 
-        public boolean validateLon() {
-            return validateLon;
+        public boolean coerce() {
+            return this.coerce;
         }
 
-        public void setValidateLon(boolean validateLon) {
+        public void setCoerce(boolean coerce) {
             checkIfFrozen();
-            this.validateLon = validateLon;
+            this.coerce = coerce;
         }
 
-        public boolean validateLat() {
-            return validateLat;
+        public boolean ignoreMalformed() {
+            return this.ignoreMalformed;
         }
 
-        public void setValidateLat(boolean validateLat) {
+        public void setIgnoreMalformed(boolean ignoreMalformed) {
             checkIfFrozen();
-            this.validateLat = validateLat;
-        }
-
-        public boolean normalizeLon() {
-            return normalizeLon;
-        }
-
-        public void setNormalizeLon(boolean normalizeLon) {
-            checkIfFrozen();
-            this.normalizeLon = normalizeLon;
-        }
-
-        public boolean normalizeLat() {
-            return normalizeLat;
-        }
-
-        public void setNormalizeLat(boolean normalizeLat) {
-            checkIfFrozen();
-            this.normalizeLat = normalizeLat;
+            this.ignoreMalformed = ignoreMalformed;
         }
 
         @Override
@@ -586,7 +570,8 @@ public class GeoPointFieldMapper extends FieldMapper implements ArrayValueMapperParser {
     private final StringFieldMapper geohashMapper;
 
     public GeoPointFieldMapper(String simpleName, MappedFieldType fieldType, MappedFieldType defaultFieldType, Settings indexSettings,
-                               ContentPath.Type pathType, DoubleFieldMapper latMapper, DoubleFieldMapper lonMapper, StringFieldMapper geohashMapper,MultiFields multiFields) {
+                               ContentPath.Type pathType, DoubleFieldMapper latMapper, DoubleFieldMapper lonMapper, StringFieldMapper geohashMapper,
+                               MultiFields multiFields) {
         super(simpleName, fieldType, defaultFieldType, indexSettings, multiFields, null);
         this.pathType = pathType;
         this.latMapper = latMapper;
@@ -680,21 +665,22 @@ public class GeoPointFieldMapper extends FieldMapper implements ArrayValueMapperParser {
     }
 
     private void parse(ParseContext context, GeoPoint point, String geohash) throws IOException {
-        if (fieldType().normalizeLat() || fieldType().normalizeLon()) {
-            GeoUtils.normalizePoint(point, fieldType().normalizeLat(), fieldType().normalizeLon());
-        }
-
-        if (fieldType().validateLat()) {
+        if (fieldType().ignoreMalformed == false) {
             if (point.lat() > 90.0 || point.lat() < -90.0) {
                 throw new IllegalArgumentException("illegal latitude value [" + point.lat() + "] for " + name());
             }
-        }
-        if (fieldType().validateLon()) {
             if (point.lon() > 180.0 || point.lon() < -180) {
                 throw new IllegalArgumentException("illegal longitude value [" + point.lon() + "] for " + name());
             }
         }
 
+        if (fieldType().coerce) {
+            // by setting coerce to false we are assuming all geopoints are already in a valid coordinate system
+            // thus this extra step can be skipped
+            // LUCENE WATCH: This will be folded back into Lucene's GeoPointField
+            GeoUtils.normalizePoint(point, true, true);
+        }
+
         if (fieldType().indexOptions() != IndexOptions.NONE || fieldType().stored()) {
             Field field = new Field(fieldType().names().indexName(), Double.toString(point.lat()) + ',' + Double.toString(point.lon()), fieldType());
             context.doc().add(field);
@@ -755,33 +741,11 @@ public class GeoPointFieldMapper extends FieldMapper implements ArrayValueMapperParser {
         if (fieldType().isLatLonEnabled() && (includeDefaults || fieldType().latFieldType().numericPrecisionStep() != NumericUtils.PRECISION_STEP_DEFAULT)) {
             builder.field("precision_step", fieldType().latFieldType().numericPrecisionStep());
         }
-        if (includeDefaults || fieldType().validateLat() != Defaults.VALIDATE_LAT || fieldType().validateLon() != Defaults.VALIDATE_LON) {
-            if (fieldType().validateLat() && fieldType().validateLon()) {
-                builder.field("validate", true);
-            } else if (!fieldType().validateLat() && !fieldType().validateLon()) {
-                builder.field("validate", false);
-            } else {
-                if (includeDefaults || fieldType().validateLat() != Defaults.VALIDATE_LAT) {
-                    builder.field("validate_lat", fieldType().validateLat());
-                }
-                if (includeDefaults || fieldType().validateLon() != Defaults.VALIDATE_LON) {
-                    builder.field("validate_lon", fieldType().validateLon());
-                }
-            }
+        if (includeDefaults || fieldType().coerce != Defaults.COERCE) {
+            builder.field(Names.COERCE, fieldType().coerce);
         }
-        if (includeDefaults || fieldType().normalizeLat() != Defaults.NORMALIZE_LAT || fieldType().normalizeLon() != Defaults.NORMALIZE_LON) {
-            if (fieldType().normalizeLat() && fieldType().normalizeLon()) {
-                builder.field("normalize", true);
-            } else if (!fieldType().normalizeLat() && !fieldType().normalizeLon()) {
-                builder.field("normalize", false);
-            } else {
-                if (includeDefaults || fieldType().normalizeLat() != Defaults.NORMALIZE_LAT) {
-                    builder.field("normalize_lat", fieldType().normalizeLat());
-                }
-                if (includeDefaults || fieldType().normalizeLon() != Defaults.NORMALIZE_LON) {
-                    builder.field("normalize_lon", fieldType().normalizeLon());
-                }
-            }
+        if (includeDefaults || fieldType().ignoreMalformed != Defaults.IGNORE_MALFORMED) {
+            builder.field(Names.IGNORE_MALFORMED, fieldType().ignoreMalformed);
         }
     }
 
@@ -812,5 +776,4 @@ public class GeoPointFieldMapper extends FieldMapper implements ArrayValueMapperParser {
             return new BytesRef(bytes);
         }
     }
-
 }
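The new mapper semantics: out-of-range points are rejected unless ignore_malformed is set, and coerce (which implies ignore_malformed) normalizes points back into range instead of rejecting them. A standalone distillation of that decision, with Point as a stand-in for GeoPoint and the latitude handling simplified relative to GeoUtils.normalizePoint:

public class GeoCoerceSketch {
    static class Point {                       // stand-in for GeoPoint
        double lat, lon;
        Point(double lat, double lon) { this.lat = lat; this.lon = lon; }
    }

    static Point parse(Point point, boolean ignoreMalformed, boolean coerce) {
        if (coerce) {
            ignoreMalformed = true;            // coerce implies ignore_malformed, as in the TypeParser
        }
        if (ignoreMalformed == false) {
            // strict validation: reject out-of-range coordinates
            if (point.lat > 90.0 || point.lat < -90.0) {
                throw new IllegalArgumentException("illegal latitude value [" + point.lat + "]");
            }
            if (point.lon > 180.0 || point.lon < -180.0) {
                throw new IllegalArgumentException("illegal longitude value [" + point.lon + "]");
            }
        }
        if (coerce) {
            // wrap the point back into range instead of rejecting it
            point.lon = ((point.lon + 180.0) % 360.0 + 360.0) % 360.0 - 180.0;
            point.lat = Math.max(-90.0, Math.min(90.0, point.lat)); // simplified: the real code also mirrors latitude across the poles
        }
        return point;
    }

    public static void main(String[] args) {
        Point p = parse(new Point(40.7, 365.0), false, true);
        System.out.println(p.lat + "," + p.lon); // 40.7,5.0
    }
}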
diff --git a/core/src/main/java/org/elasticsearch/index/query/GeoBoundingBoxQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/GeoBoundingBoxQueryBuilder.java
index 9b376ca851e..99b348e9e55 100644
--- a/core/src/main/java/org/elasticsearch/index/query/GeoBoundingBoxQueryBuilder.java
+++ b/core/src/main/java/org/elasticsearch/index/query/GeoBoundingBoxQueryBuilder.java
@@ -41,6 +41,8 @@ public class GeoBoundingBoxQueryBuilder extends QueryBuilder {
     private String queryName;
     private String type;
+    private Boolean coerce;
+    private Boolean ignoreMalformed;
 
     public GeoBoundingBoxQueryBuilder(String name) {
         this.name = name;
@@ -134,6 +136,16 @@ public class GeoBoundingBoxQueryBuilder extends QueryBuilder {
         return this;
     }
 
+    public GeoBoundingBoxQueryBuilder coerce(boolean coerce) {
+        this.coerce = coerce;
+        return this;
+    }
+
+    public GeoBoundingBoxQueryBuilder ignoreMalformed(boolean ignoreMalformed) {
+        this.ignoreMalformed = ignoreMalformed;
+        return this;
+    }
+
     /**
      * Sets the type of executing of the geo bounding box. Can be either `memory` or `indexed`. Defaults
     * to `memory`.
@@ -169,6 +181,12 @@
         if (type != null) {
             builder.field("type", type);
         }
+        if (coerce != null) {
+            builder.field("coerce", coerce);
+        }
+        if (ignoreMalformed != null) {
+            builder.field("ignore_malformed", ignoreMalformed);
+        }
 
         builder.endObject();
     }
unexpected field [{}]", NAME, currentFieldName); } @@ -150,8 +157,24 @@ public class GeoBoundingBoxQueryParser implements QueryParser { final GeoPoint topLeft = sparse.reset(top, left); //just keep the object final GeoPoint bottomRight = new GeoPoint(bottom, right); - if (normalize) { - // Special case: if the difference bettween the left and right is 360 and the right is greater than the left, we are asking for + // validation was not available prior to 2.x, so to support bwc percolation queries we only ignore_malformed on 2.x created indexes + if (!indexCreatedBeforeV2_0 && !ignoreMalformed) { + if (topLeft.lat() > 90.0 || topLeft.lat() < -90.0) { + throw new QueryParsingException(parseContext, "illegal latitude value [{}] for [{}]", topLeft.lat(), NAME); + } + if (topLeft.lon() > 180.0 || topLeft.lon() < -180) { + throw new QueryParsingException(parseContext, "illegal longitude value [{}] for [{}]", topLeft.lon(), NAME); + } + if (bottomRight.lat() > 90.0 || bottomRight.lat() < -90.0) { + throw new QueryParsingException(parseContext, "illegal latitude value [{}] for [{}]", bottomRight.lat(), NAME); + } + if (bottomRight.lon() > 180.0 || bottomRight.lon() < -180) { + throw new QueryParsingException(parseContext, "illegal longitude value [{}] for [{}]", bottomRight.lon(), NAME); + } + } + + if (coerce) { + // Special case: if the difference between the left and right is 360 and the right is greater than the left, we are asking for // the complete longitude range so need to set longitude to the complete longditude range boolean completeLonRange = ((right - left) % 360 == 0 && right > left); GeoUtils.normalizePoint(topLeft, true, !completeLonRange); diff --git a/core/src/main/java/org/elasticsearch/index/query/GeoDistanceQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/GeoDistanceQueryBuilder.java index 0995a5ecacf..77c8f944864 100644 --- a/core/src/main/java/org/elasticsearch/index/query/GeoDistanceQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/GeoDistanceQueryBuilder.java @@ -44,6 +44,10 @@ public class GeoDistanceQueryBuilder extends QueryBuilder { private String queryName; + private Boolean coerce; + + private Boolean ignoreMalformed; + public GeoDistanceQueryBuilder(String name) { this.name = name; } @@ -97,6 +101,16 @@ public class GeoDistanceQueryBuilder extends QueryBuilder { return this; } + public GeoDistanceQueryBuilder coerce(boolean coerce) { + this.coerce = coerce; + return this; + } + + public GeoDistanceQueryBuilder ignoreMalformed(boolean ignoreMalformed) { + this.ignoreMalformed = ignoreMalformed; + return this; + } + @Override protected void doXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(GeoDistanceQueryParser.NAME); @@ -115,6 +129,12 @@ public class GeoDistanceQueryBuilder extends QueryBuilder { if (queryName != null) { builder.field("_name", queryName); } + if (coerce != null) { + builder.field("coerce", coerce); + } + if (ignoreMalformed != null) { + builder.field("ignore_malformed", ignoreMalformed); + } builder.endObject(); } } diff --git a/core/src/main/java/org/elasticsearch/index/query/GeoDistanceQueryParser.java b/core/src/main/java/org/elasticsearch/index/query/GeoDistanceQueryParser.java index b78562567ec..82013818810 100644 --- a/core/src/main/java/org/elasticsearch/index/query/GeoDistanceQueryParser.java +++ b/core/src/main/java/org/elasticsearch/index/query/GeoDistanceQueryParser.java @@ -20,6 +20,7 @@ package org.elasticsearch.index.query; import 
diff --git a/core/src/main/java/org/elasticsearch/index/query/GeoDistanceQueryParser.java b/core/src/main/java/org/elasticsearch/index/query/GeoDistanceQueryParser.java
index b78562567ec..82013818810 100644
--- a/core/src/main/java/org/elasticsearch/index/query/GeoDistanceQueryParser.java
+++ b/core/src/main/java/org/elasticsearch/index/query/GeoDistanceQueryParser.java
@@ -20,6 +20,7 @@ package org.elasticsearch.index.query;
 
 import org.apache.lucene.search.Query;
+import org.elasticsearch.Version;
 import org.elasticsearch.common.geo.GeoDistance;
 import org.elasticsearch.common.geo.GeoHashUtils;
 import org.elasticsearch.common.geo.GeoPoint;
@@ -28,7 +29,6 @@ import org.elasticsearch.common.inject.Inject;
 import org.elasticsearch.common.unit.DistanceUnit;
 import org.elasticsearch.common.xcontent.XContentParser;
 import org.elasticsearch.index.fielddata.IndexGeoPointFieldData;
-import org.elasticsearch.index.mapper.FieldMapper;
 import org.elasticsearch.index.mapper.MappedFieldType;
 import org.elasticsearch.index.mapper.geo.GeoPointFieldMapper;
 import org.elasticsearch.index.search.geo.GeoDistanceRangeQuery;
@@ -71,8 +71,9 @@ public class GeoDistanceQueryParser implements QueryParser {
         DistanceUnit unit = DistanceUnit.DEFAULT;
         GeoDistance geoDistance = GeoDistance.DEFAULT;
         String optimizeBbox = "memory";
-        boolean normalizeLon = true;
-        boolean normalizeLat = true;
+        final boolean indexCreatedBeforeV2_0 = parseContext.indexVersionCreated().before(Version.V_2_0_0);
+        boolean coerce = false;
+        boolean ignoreMalformed = false;
         while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {
             if (token == XContentParser.Token.FIELD_NAME) {
                 currentFieldName = parser.currentName();
@@ -125,9 +126,13 @@ public class GeoDistanceQueryParser implements QueryParser {
                     queryName = parser.text();
                 } else if ("optimize_bbox".equals(currentFieldName) || "optimizeBbox".equals(currentFieldName)) {
                     optimizeBbox = parser.textOrNull();
-                } else if ("normalize".equals(currentFieldName)) {
-                    normalizeLat = parser.booleanValue();
-                    normalizeLon = parser.booleanValue();
+                } else if ("coerce".equals(currentFieldName) || (indexCreatedBeforeV2_0 && "normalize".equals(currentFieldName))) {
+                    coerce = parser.booleanValue();
+                    if (coerce == true) {
+                        ignoreMalformed = true;
+                    }
+                } else if ("ignore_malformed".equals(currentFieldName) && coerce == false) {
+                    ignoreMalformed = parser.booleanValue();
                 } else {
                     point.resetFromString(parser.text());
                     fieldName = currentFieldName;
@@ -135,6 +140,20 @@ public class GeoDistanceQueryParser implements QueryParser {
             }
         }
 
+        // validation was not available prior to 2.x, so to support bwc percolation queries we only ignore_malformed on 2.x created indexes
+        if (!indexCreatedBeforeV2_0 && !ignoreMalformed) {
+            if (point.lat() > 90.0 || point.lat() < -90.0) {
+                throw new QueryParsingException(parseContext, "illegal latitude value [{}] for [{}]", point.lat(), NAME);
+            }
+            if (point.lon() > 180.0 || point.lon() < -180) {
+                throw new QueryParsingException(parseContext, "illegal longitude value [{}] for [{}]", point.lon(), NAME);
+            }
+        }
+
+        if (coerce) {
+            GeoUtils.normalizePoint(point, coerce, coerce);
+        }
+
         if (vDistance == null) {
             throw new QueryParsingException(parseContext, "geo_distance requires 'distance' to be specified");
         } else if (vDistance instanceof Number) {
@@ -144,10 +163,6 @@ public class GeoDistanceQueryParser implements QueryParser {
         }
         distance = geoDistance.normalize(distance, DistanceUnit.DEFAULT);
 
-        if (normalizeLat || normalizeLon) {
-            GeoUtils.normalizePoint(point, normalizeLat, normalizeLon);
-        }
-
         MappedFieldType fieldType = parseContext.fieldMapper(fieldName);
         if (fieldType == null) {
             throw new QueryParsingException(parseContext, "failed to find geo_point field [" + fieldName + "]");
diff --git a/core/src/main/java/org/elasticsearch/index/query/GeoDistanceRangeQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/GeoDistanceRangeQueryBuilder.java
index d21810f97e3..6aa6f0fd9d7 100644
--- a/core/src/main/java/org/elasticsearch/index/query/GeoDistanceRangeQueryBuilder.java
+++ b/core/src/main/java/org/elasticsearch/index/query/GeoDistanceRangeQueryBuilder.java
@@ -46,6 +46,10 @@ public class GeoDistanceRangeQueryBuilder extends QueryBuilder {
 
     private String optimizeBbox;
 
+    private Boolean coerce;
+
+    private Boolean ignoreMalformed;
+
     public GeoDistanceRangeQueryBuilder(String name) {
         this.name = name;
     }
@@ -125,6 +129,16 @@ public class GeoDistanceRangeQueryBuilder extends QueryBuilder {
         return this;
     }
 
+    public GeoDistanceRangeQueryBuilder coerce(boolean coerce) {
+        this.coerce = coerce;
+        return this;
+    }
+
+    public GeoDistanceRangeQueryBuilder ignoreMalformed(boolean ignoreMalformed) {
+        this.ignoreMalformed = ignoreMalformed;
+        return this;
+    }
+
     /**
      * Sets the filter name for the filter that can be used when searching for matched_filters per hit.
     */
@@ -154,6 +168,12 @@ public class GeoDistanceRangeQueryBuilder extends QueryBuilder {
         if (queryName != null) {
             builder.field("_name", queryName);
         }
+        if (coerce != null) {
+            builder.field("coerce", coerce);
+        }
+        if (ignoreMalformed != null) {
+            builder.field("ignore_malformed", ignoreMalformed);
+        }
         builder.endObject();
     }
 }
diff --git a/core/src/main/java/org/elasticsearch/index/query/GeoDistanceRangeQueryParser.java b/core/src/main/java/org/elasticsearch/index/query/GeoDistanceRangeQueryParser.java
index 6c8479bee82..f60d9447e7e 100644
--- a/core/src/main/java/org/elasticsearch/index/query/GeoDistanceRangeQueryParser.java
+++ b/core/src/main/java/org/elasticsearch/index/query/GeoDistanceRangeQueryParser.java
@@ -20,6 +20,7 @@ package org.elasticsearch.index.query;
 
 import org.apache.lucene.search.Query;
+import org.elasticsearch.Version;
 import org.elasticsearch.common.geo.GeoDistance;
 import org.elasticsearch.common.geo.GeoHashUtils;
 import org.elasticsearch.common.geo.GeoPoint;
@@ -28,7 +29,6 @@ import org.elasticsearch.common.inject.Inject;
 import org.elasticsearch.common.unit.DistanceUnit;
 import org.elasticsearch.common.xcontent.XContentParser;
 import org.elasticsearch.index.fielddata.IndexGeoPointFieldData;
-import org.elasticsearch.index.mapper.FieldMapper;
 import org.elasticsearch.index.mapper.MappedFieldType;
 import org.elasticsearch.index.mapper.geo.GeoPointFieldMapper;
 import org.elasticsearch.index.search.geo.GeoDistanceRangeQuery;
@@ -73,8 +73,9 @@ public class GeoDistanceRangeQueryParser implements QueryParser {
         DistanceUnit unit = DistanceUnit.DEFAULT;
         GeoDistance geoDistance = GeoDistance.DEFAULT;
         String optimizeBbox = "memory";
-        boolean normalizeLon = true;
-        boolean normalizeLat = true;
+        final boolean indexCreatedBeforeV2_0 = parseContext.indexVersionCreated().before(Version.V_2_0_0);
+        boolean coerce = false;
+        boolean ignoreMalformed = false;
         while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {
             if (token == XContentParser.Token.FIELD_NAME) {
                 currentFieldName = parser.currentName();
@@ -155,9 +156,13 @@ public class GeoDistanceRangeQueryParser implements QueryParser {
                     queryName = parser.text();
                 } else if ("optimize_bbox".equals(currentFieldName) || "optimizeBbox".equals(currentFieldName)) {
                     optimizeBbox = parser.textOrNull();
-                } else if ("normalize".equals(currentFieldName)) {
-                    normalizeLat = parser.booleanValue();
-                    normalizeLon = parser.booleanValue();
+                } else if ("coerce".equals(currentFieldName) || (indexCreatedBeforeV2_0 && "normalize".equals(currentFieldName))) {
+                    coerce = parser.booleanValue();
+                    if (coerce == true) {
+                        ignoreMalformed = true;
+                    }
+                } else if ("ignore_malformed".equals(currentFieldName) && coerce == false) {
+                    ignoreMalformed = parser.booleanValue();
                 } else {
                     point.resetFromString(parser.text());
                     fieldName = currentFieldName;
@@ -165,6 +170,20 @@ public class GeoDistanceRangeQueryParser implements QueryParser {
             }
         }
 
+        // validation was not available prior to 2.x, so to support bwc percolation queries we only ignore_malformed on 2.x created indexes
+        if (!indexCreatedBeforeV2_0 && !ignoreMalformed) {
+            if (point.lat() > 90.0 || point.lat() < -90.0) {
+                throw new QueryParsingException(parseContext, "illegal latitude value [{}] for [{}]", point.lat(), NAME);
+            }
+            if (point.lon() > 180.0 || point.lon() < -180) {
+                throw new QueryParsingException(parseContext, "illegal longitude value [{}] for [{}]", point.lon(), NAME);
+            }
+        }
+
+        if (coerce) {
+            GeoUtils.normalizePoint(point, coerce, coerce);
+        }
+
         Double from = null;
         Double to = null;
         if (vFrom != null) {
@@ -184,10 +203,6 @@ public class GeoDistanceRangeQueryParser implements QueryParser {
             to = geoDistance.normalize(to, DistanceUnit.DEFAULT);
         }
 
-        if (normalizeLat || normalizeLon) {
-            GeoUtils.normalizePoint(point, normalizeLat, normalizeLon);
-        }
-
         MappedFieldType fieldType = parseContext.fieldMapper(fieldName);
         if (fieldType == null) {
             throw new QueryParsingException(parseContext, "failed to find geo_point field [" + fieldName + "]");
diff --git a/core/src/main/java/org/elasticsearch/index/query/GeoPolygonQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/GeoPolygonQueryBuilder.java
index 4fd2f4153d9..27d14ecb137 100644
--- a/core/src/main/java/org/elasticsearch/index/query/GeoPolygonQueryBuilder.java
+++ b/core/src/main/java/org/elasticsearch/index/query/GeoPolygonQueryBuilder.java
@@ -38,6 +38,10 @@ public class GeoPolygonQueryBuilder extends QueryBuilder {
 
     private String queryName;
 
+    private Boolean coerce;
+
+    private Boolean ignoreMalformed;
+
     public GeoPolygonQueryBuilder(String name) {
         this.name = name;
     }
@@ -70,6 +74,16 @@ public class GeoPolygonQueryBuilder extends QueryBuilder {
         return this;
     }
 
+    public GeoPolygonQueryBuilder coerce(boolean coerce) {
+        this.coerce = coerce;
+        return this;
+    }
+
+    public GeoPolygonQueryBuilder ignoreMalformed(boolean ignoreMalformed) {
+        this.ignoreMalformed = ignoreMalformed;
+        return this;
+    }
+
     @Override
     protected void doXContent(XContentBuilder builder, Params params) throws IOException {
         builder.startObject(GeoPolygonQueryParser.NAME);
@@ -85,6 +99,12 @@ public class GeoPolygonQueryBuilder extends QueryBuilder {
         if (queryName != null) {
             builder.field("_name", queryName);
         }
+        if (coerce != null) {
+            builder.field("coerce", coerce);
+        }
+        if (ignoreMalformed != null) {
+            builder.field("ignore_malformed", ignoreMalformed);
+        }
 
         builder.endObject();
     }
diff --git a/core/src/main/java/org/elasticsearch/index/query/GeoPolygonQueryParser.java b/core/src/main/java/org/elasticsearch/index/query/GeoPolygonQueryParser.java
index 43d368666e1..00401bc8ca8 100644
--- a/core/src/main/java/org/elasticsearch/index/query/GeoPolygonQueryParser.java
+++ b/core/src/main/java/org/elasticsearch/index/query/GeoPolygonQueryParser.java
@@ -22,13 +22,13 @@ package org.elasticsearch.index.query;
 
 import com.google.common.collect.Lists;
 import org.apache.lucene.search.Query;
+import org.elasticsearch.Version;
 import org.elasticsearch.common.geo.GeoPoint;
 import org.elasticsearch.common.geo.GeoUtils;
 import org.elasticsearch.common.inject.Inject;
 import org.elasticsearch.common.xcontent.XContentParser;
 import org.elasticsearch.common.xcontent.XContentParser.Token;
 import org.elasticsearch.index.fielddata.IndexGeoPointFieldData;
-import org.elasticsearch.index.mapper.FieldMapper;
 import org.elasticsearch.index.mapper.MappedFieldType;
 import org.elasticsearch.index.mapper.geo.GeoPointFieldMapper;
 import org.elasticsearch.index.search.geo.GeoPolygonQuery;
@@ -70,9 +70,9 @@ public class GeoPolygonQueryParser implements QueryParser {
 
         List<GeoPoint> shell = Lists.newArrayList();
 
-        boolean normalizeLon = true;
-        boolean normalizeLat = true;
-
+        final boolean indexCreatedBeforeV2_0 = parseContext.indexVersionCreated().before(Version.V_2_0_0);
+        boolean coerce = false;
+        boolean ignoreMalformed = false;
         String queryName = null;
         String currentFieldName = null;
         XContentParser.Token token;
@@ -108,9 +108,13 @@ public class GeoPolygonQueryParser implements QueryParser {
             } else if (token.isValue()) {
                 if ("_name".equals(currentFieldName)) {
                     queryName = parser.text();
-                } else if ("normalize".equals(currentFieldName)) {
-                    normalizeLat = parser.booleanValue();
-                    normalizeLon = parser.booleanValue();
+                } else if ("coerce".equals(currentFieldName) || (indexCreatedBeforeV2_0 && "normalize".equals(currentFieldName))) {
+                    coerce = parser.booleanValue();
+                    if (coerce == true) {
+                        ignoreMalformed = true;
+                    }
+                } else if ("ignore_malformed".equals(currentFieldName) && coerce == false) {
+                    ignoreMalformed = parser.booleanValue();
                 } else {
                     throw new QueryParsingException(parseContext, "[geo_polygon] query does not support [" + currentFieldName + "]");
                 }
@@ -134,9 +138,21 @@ public class GeoPolygonQueryParser implements QueryParser {
             }
         }
 
-        if (normalizeLat || normalizeLon) {
+        // validation was not available prior to 2.x, so to support bwc percolation queries we only ignore_malformed on 2.x created indexes
+        if (!indexCreatedBeforeV2_0 && !ignoreMalformed) {
             for (GeoPoint point : shell) {
-                GeoUtils.normalizePoint(point, normalizeLat, normalizeLon);
+                if (point.lat() > 90.0 || point.lat() < -90.0) {
+                    throw new QueryParsingException(parseContext, "illegal latitude value [{}] for [{}]", point.lat(), NAME);
+                }
+                if (point.lon() > 180.0 || point.lon() < -180) {
+                    throw new QueryParsingException(parseContext, "illegal longitude value [{}] for [{}]", point.lon(), NAME);
+                }
+            }
+        }
+
+        if (coerce) {
+            for (GeoPoint point : shell) {
+                GeoUtils.normalizePoint(point, coerce, coerce);
             }
         }
 
diff --git a/core/src/main/java/org/elasticsearch/index/query/HasChildQueryParser.java b/core/src/main/java/org/elasticsearch/index/query/HasChildQueryParser.java
index c5388b16557..d911005eb89 100644
--- a/core/src/main/java/org/elasticsearch/index/query/HasChildQueryParser.java
+++ b/core/src/main/java/org/elasticsearch/index/query/HasChildQueryParser.java
@@ -258,6 +258,7 @@ public class HasChildQueryParser implements QueryParser {
             String joinField = ParentFieldMapper.joinField(parentType);
             IndexReader indexReader = searchContext.searcher().getIndexReader();
             IndexSearcher indexSearcher = new IndexSearcher(indexReader);
+            indexSearcher.setQueryCache(null);
             IndexParentChildFieldData indexParentChildFieldData = parentChildIndexFieldData.loadGlobal(indexReader);
             MultiDocValues.OrdinalMap ordinalMap = ParentChildIndexFieldData.getOrdinalMap(indexParentChildFieldData, parentType);
             return JoinUtil.createJoinQuery(joinField, innerQuery, toQuery, indexSearcher, scoreMode, ordinalMap, minChildren, maxChildren);
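The one-line HasChildQueryParser change above keeps a throwaway, per-request IndexSearcher from populating the shared Lucene query cache. The pattern in isolation (standard Lucene API; the reader comes from elsewhere):

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.search.IndexSearcher;

public class NoCacheSearcherSketch {
    static IndexSearcher newUncachedSearcher(DirectoryReader reader) {
        IndexSearcher searcher = new IndexSearcher(reader);
        searcher.setQueryCache(null); // a null cache disables query caching for this searcher only
        return searcher;
    }
}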
index 68ffe435d77..6bfe4c7843a 100644 --- a/core/src/main/java/org/elasticsearch/index/query/NotQueryParser.java +++ b/core/src/main/java/org/elasticsearch/index/query/NotQueryParser.java @@ -68,10 +68,6 @@ public class NotQueryParser implements QueryParser { // its the filter, and the name is the field query = parseContext.parseInnerFilter(currentFieldName); } - } else if (token == XContentParser.Token.START_ARRAY) { - queryFound = true; - // its the filter, and the name is the field - query = parseContext.parseInnerFilter(currentFieldName); } else if (token.isValue()) { if ("_name".equals(currentFieldName)) { queryName = parser.text(); diff --git a/core/src/main/java/org/elasticsearch/index/query/TermsQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/TermsQueryBuilder.java index 9ffdb0c647e..ca54eb3b3d3 100644 --- a/core/src/main/java/org/elasticsearch/index/query/TermsQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/TermsQueryBuilder.java @@ -38,8 +38,6 @@ public class TermsQueryBuilder extends QueryBuilder implements BoostableQueryBui private String queryName; - private String execution; - private float boost = -1; /** @@ -118,17 +116,6 @@ public class TermsQueryBuilder extends QueryBuilder implements BoostableQueryBui this.values = values; } - /** - * Sets the execution mode for the terms filter. Cane be either "plain", "bool" - * "and". Defaults to "plain". - * @deprecated elasticsearch now makes better decisions on its own - */ - @Deprecated - public TermsQueryBuilder execution(String execution) { - this.execution = execution; - return this; - } - /** * Sets the minimum number of matches across the provided terms. Defaults to 1. * @deprecated use [bool] query instead @@ -168,10 +155,6 @@ public class TermsQueryBuilder extends QueryBuilder implements BoostableQueryBui builder.startObject(TermsQueryParser.NAME); builder.field(name, values); - if (execution != null) { - builder.field("execution", execution); - } - if (minimumShouldMatch != null) { builder.field("minimum_should_match", minimumShouldMatch); } diff --git a/core/src/main/java/org/elasticsearch/index/query/TermsQueryParser.java b/core/src/main/java/org/elasticsearch/index/query/TermsQueryParser.java index fa643892e45..db2fed37226 100644 --- a/core/src/main/java/org/elasticsearch/index/query/TermsQueryParser.java +++ b/core/src/main/java/org/elasticsearch/index/query/TermsQueryParser.java @@ -52,11 +52,9 @@ public class TermsQueryParser implements QueryParser { public static final String NAME = "terms"; private static final ParseField MIN_SHOULD_MATCH_FIELD = new ParseField("min_match", "min_should_match").withAllDeprecated("Use [bool] query instead"); private static final ParseField DISABLE_COORD_FIELD = new ParseField("disable_coord").withAllDeprecated("Use [bool] query instead"); + private static final ParseField EXECUTION_FIELD = new ParseField("execution").withAllDeprecated("execution is deprecated and has no effect"); private Client client; - @Deprecated - public static final String EXECUTION_KEY = "execution"; - @Inject public TermsQueryParser() { } @@ -141,7 +139,7 @@ public class TermsQueryParser implements QueryParser { throw new QueryParsingException(parseContext, "[terms] query lookup element requires specifying the path"); } } else if (token.isValue()) { - if (EXECUTION_KEY.equals(currentFieldName)) { + if (parseContext.parseFieldMatcher().match(currentFieldName, EXECUTION_FIELD)) { // ignore } else if (parseContext.parseFieldMatcher().match(currentFieldName, 
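Editor's note: the terms execution option above is no longer a hard-coded key; it is matched through a ParseField whose names are all deprecated, then ignored. A toy imitation of that matching pattern, assuming a simplified ParseField (the real org.elasticsearch.common.ParseField and its ParseFieldMatcher carry more state, such as strictness configuration):

import java.util.Arrays;
import java.util.List;

// Toy version of the ParseField pattern: match current and deprecated names,
// optionally rejecting the deprecated ones outright in strict mode.
public class ParseFieldSketch {
    static final class ParseField {
        final String name;
        final List<String> deprecated;
        ParseField(String name, String... deprecated) {
            this.name = name;
            this.deprecated = Arrays.asList(deprecated);
        }
        boolean match(String candidate, boolean strictDeprecation) {
            if (name.equals(candidate)) return true;
            if (deprecated.contains(candidate)) {
                if (strictDeprecation) {
                    throw new IllegalArgumentException("[" + candidate + "] is deprecated");
                }
                System.err.println("deprecation: [" + candidate + "] used, it has no effect");
                return true;
            }
            return false;
        }
    }

    public static void main(String[] args) {
        // "execution" is deprecated with no replacement: match it, then ignore it.
        ParseField execution = new ParseField("__unused__", "execution");
        if (execution.match("execution", false)) {
            // ignore, as the parser now does
        }
    }
}

The cardinality parser later in this diff retires its rehash option through the same mechanism.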
MIN_SHOULD_MATCH_FIELD)) { if (minShouldMatch != null) { diff --git a/core/src/main/java/org/elasticsearch/indices/IndicesModule.java b/core/src/main/java/org/elasticsearch/indices/IndicesModule.java index 4e8946b5339..759c9e5e150 100644 --- a/core/src/main/java/org/elasticsearch/indices/IndicesModule.java +++ b/core/src/main/java/org/elasticsearch/indices/IndicesModule.java @@ -19,15 +19,17 @@ package org.elasticsearch.indices; -import com.google.common.collect.ImmutableList; - +import org.apache.lucene.analysis.hunspell.Dictionary; import org.elasticsearch.action.update.UpdateHelper; import org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService; +import org.elasticsearch.common.geo.ShapesAvailability; import org.elasticsearch.common.inject.AbstractModule; -import org.elasticsearch.common.inject.Module; -import org.elasticsearch.common.inject.SpawnModules; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.indices.analysis.IndicesAnalysisModule; +import org.elasticsearch.common.util.ExtensionPoint; +import org.elasticsearch.index.query.*; +import org.elasticsearch.index.query.functionscore.FunctionScoreQueryParser; +import org.elasticsearch.indices.analysis.HunspellService; +import org.elasticsearch.indices.analysis.IndicesAnalysisService; import org.elasticsearch.indices.cache.query.IndicesQueryCache; import org.elasticsearch.indices.cache.request.IndicesRequestCache; import org.elasticsearch.indices.cluster.IndicesClusterStateService; @@ -35,7 +37,7 @@ import org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache; import org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCacheListener; import org.elasticsearch.indices.flush.SyncedFlushService; import org.elasticsearch.indices.memory.IndexingMemoryController; -import org.elasticsearch.indices.query.IndicesQueriesModule; +import org.elasticsearch.indices.query.IndicesQueriesRegistry; import org.elasticsearch.indices.recovery.RecoverySettings; import org.elasticsearch.indices.recovery.RecoverySource; import org.elasticsearch.indices.recovery.RecoveryTarget; @@ -44,27 +46,95 @@ import org.elasticsearch.indices.store.TransportNodesListShardStoreMetaData; import org.elasticsearch.indices.ttl.IndicesTTLService; /** - * + * Configures classes and services that are shared by indices on each node. 
*/ -public class IndicesModule extends AbstractModule implements SpawnModules { +public class IndicesModule extends AbstractModule { private final Settings settings; + private final ExtensionPoint.ClassSet queryParsers + = new ExtensionPoint.ClassSet<>("query_parser", QueryParser.class); + private final ExtensionPoint.InstanceMap hunspellDictionaries + = new ExtensionPoint.InstanceMap<>("hunspell_dictionary", String.class, Dictionary.class); + public IndicesModule(Settings settings) { this.settings = settings; + registerBuiltinQueryParsers(); + } + + private void registerBuiltinQueryParsers() { + registerQueryParser(MatchQueryParser.class); + registerQueryParser(MultiMatchQueryParser.class); + registerQueryParser(NestedQueryParser.class); + registerQueryParser(HasChildQueryParser.class); + registerQueryParser(HasParentQueryParser.class); + registerQueryParser(DisMaxQueryParser.class); + registerQueryParser(IdsQueryParser.class); + registerQueryParser(MatchAllQueryParser.class); + registerQueryParser(QueryStringQueryParser.class); + registerQueryParser(BoostingQueryParser.class); + registerQueryParser(BoolQueryParser.class); + registerQueryParser(TermQueryParser.class); + registerQueryParser(TermsQueryParser.class); + registerQueryParser(FuzzyQueryParser.class); + registerQueryParser(RegexpQueryParser.class); + registerQueryParser(RangeQueryParser.class); + registerQueryParser(PrefixQueryParser.class); + registerQueryParser(WildcardQueryParser.class); + registerQueryParser(FilteredQueryParser.class); + registerQueryParser(ConstantScoreQueryParser.class); + registerQueryParser(SpanTermQueryParser.class); + registerQueryParser(SpanNotQueryParser.class); + registerQueryParser(SpanWithinQueryParser.class); + registerQueryParser(SpanContainingQueryParser.class); + registerQueryParser(FieldMaskingSpanQueryParser.class); + registerQueryParser(SpanFirstQueryParser.class); + registerQueryParser(SpanNearQueryParser.class); + registerQueryParser(SpanOrQueryParser.class); + registerQueryParser(MoreLikeThisQueryParser.class); + registerQueryParser(WrapperQueryParser.class); + registerQueryParser(IndicesQueryParser.class); + registerQueryParser(CommonTermsQueryParser.class); + registerQueryParser(SpanMultiTermQueryParser.class); + registerQueryParser(FunctionScoreQueryParser.class); + registerQueryParser(SimpleQueryStringParser.class); + registerQueryParser(TemplateQueryParser.class); + registerQueryParser(TypeQueryParser.class); + registerQueryParser(LimitQueryParser.class); + registerQueryParser(ScriptQueryParser.class); + registerQueryParser(GeoDistanceQueryParser.class); + registerQueryParser(GeoDistanceRangeQueryParser.class); + registerQueryParser(GeoBoundingBoxQueryParser.class); + registerQueryParser(GeohashCellQuery.Parser.class); + registerQueryParser(GeoPolygonQueryParser.class); + registerQueryParser(QueryFilterParser.class); + registerQueryParser(FQueryFilterParser.class); + registerQueryParser(AndQueryParser.class); + registerQueryParser(OrQueryParser.class); + registerQueryParser(NotQueryParser.class); + registerQueryParser(ExistsQueryParser.class); + registerQueryParser(MissingQueryParser.class); + + if (ShapesAvailability.JTS_AVAILABLE) { + registerQueryParser(GeoShapeQueryParser.class); + } } - @Override - public Iterable extends Module> spawnModules() { - return ImmutableList.of(new IndicesQueriesModule(), new IndicesAnalysisModule()); + public void registerQueryParser(Class extends QueryParser> queryParser) { + queryParsers.registerExtension(queryParser); + } + + public void 
registerHunspellDictionary(String name, Dictionary dictionary) { + hunspellDictionaries.registerExtension(name, dictionary); } @Override protected void configure() { + bindQueryParsersExtension(); + bindHunspellExtension(); + bind(IndicesLifecycle.class).to(InternalIndicesLifecycle.class).asEagerSingleton(); - bind(IndicesService.class).asEagerSingleton(); - bind(RecoverySettings.class).asEagerSingleton(); bind(RecoveryTarget.class).asEagerSingleton(); bind(RecoverySource.class).asEagerSingleton(); @@ -80,7 +150,17 @@ public class IndicesModule extends AbstractModule implements SpawnModules { bind(IndicesWarmer.class).asEagerSingleton(); bind(UpdateHelper.class).asEagerSingleton(); bind(MetaDataIndexUpgradeService.class).asEagerSingleton(); - bind(IndicesFieldDataCacheListener.class).asEagerSingleton(); } + + protected void bindQueryParsersExtension() { + queryParsers.bind(binder()); + bind(IndicesQueriesRegistry.class).asEagerSingleton(); + } + + protected void bindHunspellExtension() { + hunspellDictionaries.bind(binder()); + bind(HunspellService.class).asEagerSingleton(); + bind(IndicesAnalysisService.class).asEagerSingleton(); + } } diff --git a/core/src/main/java/org/elasticsearch/indices/analysis/IndicesAnalysisModule.java b/core/src/main/java/org/elasticsearch/indices/analysis/IndicesAnalysisModule.java deleted file mode 100644 index 5c5de5f4356..00000000000 --- a/core/src/main/java/org/elasticsearch/indices/analysis/IndicesAnalysisModule.java +++ /dev/null @@ -1,47 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
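Editor's note: IndicesModule now owns extension registration directly instead of spawning IndicesQueriesModule and IndicesAnalysisModule. A toy imitation of the ExtensionPoint.ClassSet idea it builds on; the duplicate-rejection behavior is an assumption based on the registration style here, and the real class additionally knows how to bind the collected classes into Guice:

import java.util.LinkedHashSet;
import java.util.Set;

// Toy version of ExtensionPoint.ClassSet: collect extension classes under a
// name, the way the module collects QueryParser implementations.
public class ClassSetSketch<T> {
    private final String name;
    private final Class<T> extensionType;
    private final Set<Class<? extends T>> extensions = new LinkedHashSet<>();

    public ClassSetSketch(String name, Class<T> extensionType) {
        this.name = name;
        this.extensionType = extensionType;
    }

    public void registerExtension(Class<? extends T> extension) {
        if (!extensions.add(extension)) {
            throw new IllegalArgumentException("can't register the same [" + name + "] extension ["
                + extension.getName() + "] of type [" + extensionType.getSimpleName() + "] twice");
        }
    }

    // In the real module, bind(...) hands these to Guice; here we just expose them.
    public Set<Class<? extends T>> registered() {
        return extensions;
    }

    public static void main(String[] args) {
        ClassSetSketch<Runnable> parsers = new ClassSetSketch<>("query_parser", Runnable.class);
        parsers.registerExtension(Thread.class);
        System.out.println(parsers.registered());
    }
}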
- */ - -package org.elasticsearch.indices.analysis; - -import com.google.common.collect.Maps; -import org.apache.lucene.analysis.hunspell.Dictionary; -import org.elasticsearch.common.inject.AbstractModule; -import org.elasticsearch.common.inject.multibindings.MapBinder; - -import java.util.Map; - -public class IndicesAnalysisModule extends AbstractModule { - - private final Map hunspellDictionaries = Maps.newHashMap(); - - public void addHunspellDictionary(String lang, Dictionary dictionary) { - hunspellDictionaries.put(lang, dictionary); - } - - @Override - protected void configure() { - bind(IndicesAnalysisService.class).asEagerSingleton(); - - MapBinder dictionariesBinder = MapBinder.newMapBinder(binder(), String.class, Dictionary.class); - for (Map.Entry entry : hunspellDictionaries.entrySet()) { - dictionariesBinder.addBinding(entry.getKey()).toInstance(entry.getValue()); - } - bind(HunspellService.class).asEagerSingleton(); - } -} \ No newline at end of file diff --git a/core/src/main/java/org/elasticsearch/indices/query/IndicesQueriesModule.java b/core/src/main/java/org/elasticsearch/indices/query/IndicesQueriesModule.java deleted file mode 100644 index fb7ca1784e3..00000000000 --- a/core/src/main/java/org/elasticsearch/indices/query/IndicesQueriesModule.java +++ /dev/null @@ -1,105 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.indices.query; - -import com.google.common.collect.Sets; -import org.elasticsearch.common.geo.ShapesAvailability; -import org.elasticsearch.common.inject.AbstractModule; -import org.elasticsearch.common.inject.multibindings.Multibinder; -import org.elasticsearch.index.query.*; -import org.elasticsearch.index.query.functionscore.FunctionScoreQueryParser; - -import java.util.Set; - -public class IndicesQueriesModule extends AbstractModule { - - private Set > queryParsersClasses = Sets.newHashSet(); - - public synchronized IndicesQueriesModule addQuery(Class extends QueryParser> queryParser) { - queryParsersClasses.add(queryParser); - return this; - } - - @Override - protected void configure() { - bind(IndicesQueriesRegistry.class).asEagerSingleton(); - - Multibinder qpBinders = Multibinder.newSetBinder(binder(), QueryParser.class); - for (Class extends QueryParser> queryParser : queryParsersClasses) { - qpBinders.addBinding().to(queryParser).asEagerSingleton(); - } - qpBinders.addBinding().to(MatchQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(MultiMatchQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(NestedQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(HasChildQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(HasParentQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(DisMaxQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(IdsQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(MatchAllQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(QueryStringQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(BoostingQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(BoolQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(TermQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(TermsQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(FuzzyQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(RegexpQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(RangeQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(PrefixQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(WildcardQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(FilteredQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(ConstantScoreQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(SpanTermQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(SpanNotQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(SpanWithinQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(SpanContainingQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(FieldMaskingSpanQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(SpanFirstQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(SpanNearQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(SpanOrQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(MoreLikeThisQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(WrapperQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(IndicesQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(CommonTermsQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(SpanMultiTermQueryParser.class).asEagerSingleton(); - 
qpBinders.addBinding().to(FunctionScoreQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(SimpleQueryStringParser.class).asEagerSingleton(); - qpBinders.addBinding().to(TemplateQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(TypeQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(LimitQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(TermsQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(ScriptQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(GeoDistanceQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(GeoDistanceRangeQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(GeoBoundingBoxQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(GeohashCellQuery.Parser.class).asEagerSingleton(); - qpBinders.addBinding().to(GeoPolygonQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(QueryFilterParser.class).asEagerSingleton(); - qpBinders.addBinding().to(FQueryFilterParser.class).asEagerSingleton(); - qpBinders.addBinding().to(AndQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(OrQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(NotQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(ExistsQueryParser.class).asEagerSingleton(); - qpBinders.addBinding().to(MissingQueryParser.class).asEagerSingleton(); - - if (ShapesAvailability.JTS_AVAILABLE) { - qpBinders.addBinding().to(GeoShapeQueryParser.class).asEagerSingleton(); - } - } -} diff --git a/core/src/main/java/org/elasticsearch/indices/recovery/RecoverySourceHandler.java b/core/src/main/java/org/elasticsearch/indices/recovery/RecoverySourceHandler.java index 295ab49ac7f..9d0439dc167 100644 --- a/core/src/main/java/org/elasticsearch/indices/recovery/RecoverySourceHandler.java +++ b/core/src/main/java/org/elasticsearch/indices/recovery/RecoverySourceHandler.java @@ -293,7 +293,7 @@ public class RecoverySourceHandler { store.incRef(); final StoreFileMetaData md = recoverySourceMetadata.get(name); try (final IndexInput indexInput = store.directory().openInput(name, IOContext.READONCE)) { - final int BUFFER_SIZE = (int) recoverySettings.fileChunkSize().bytes(); + final int BUFFER_SIZE = (int) Math.max(1, recoverySettings.fileChunkSize().bytes()); // at least one! 
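Editor's note: the RecoverySourceHandler fix above guards the chunk buffer size with Math.max(1, ...). The point is easy to demonstrate standalone: a zero-length buffer makes a read loop spin forever, because InputStream.read(byte[]) returns 0 rather than -1 for an empty buffer. chunkSizeBytes below stands in for recoverySettings.fileChunkSize().bytes():

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Why the Math.max(1, ...) guard matters: with a 0-byte buffer the copy loop
// below would never make progress.
public class ChunkCopySketch {
    static long copy(InputStream in, long chunkSizeBytes) throws IOException {
        final int bufferSize = (int) Math.max(1, chunkSizeBytes); // at least one byte
        final byte[] buf = new byte[bufferSize];
        long total = 0;
        int read;
        while ((read = in.read(buf)) != -1) {
            total += read; // a real handler would ship this chunk to the target node
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(copy(new ByteArrayInputStream(new byte[1234]), 0)); // still terminates
    }
}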
final byte[] buf = new byte[BUFFER_SIZE]; boolean shouldCompressRequest = recoverySettings.compress(); if (CompressorFactory.isCompressed(indexInput)) { diff --git a/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryStatus.java b/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryStatus.java index b4d5bf6471e..cef24543917 100644 --- a/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryStatus.java +++ b/core/src/main/java/org/elasticsearch/indices/recovery/RecoveryStatus.java @@ -226,6 +226,9 @@ public class RecoveryStatus extends AbstractRefCounted { public IndexOutput openAndPutIndexOutput(String fileName, StoreFileMetaData metaData, Store store) throws IOException { ensureRefCount(); String tempFileName = getTempNameForFile(fileName); + if (tempFileNames.containsKey(tempFileName)) { + throw new IllegalStateException("output for file [" + fileName + "] has already been created"); + } // add first, before it's created tempFileNames.put(tempFileName, fileName); IndexOutput indexOutput = store.createVerifyingOutput(tempFileName, metaData, IOContext.DEFAULT); diff --git a/core/src/main/java/org/elasticsearch/node/service/NodeService.java b/core/src/main/java/org/elasticsearch/node/service/NodeService.java index 5be6bf11e56..19647539b61 100644 --- a/core/src/main/java/org/elasticsearch/node/service/NodeService.java +++ b/core/src/main/java/org/elasticsearch/node/service/NodeService.java @@ -119,7 +119,7 @@ public class NodeService extends AbstractComponent { } public NodeInfo info(boolean settings, boolean os, boolean process, boolean jvm, boolean threadPool, - boolean network, boolean transport, boolean http, boolean plugin) { + boolean transport, boolean http, boolean plugin) { return new NodeInfo(version, Build.CURRENT, discovery.localNode(), serviceAttributes, settings ? this.settings : null, os ? monitorService.osService().info() : null, @@ -149,7 +149,7 @@ public class NodeService extends AbstractComponent { ); } - public NodeStats stats(CommonStatsFlags indices, boolean os, boolean process, boolean jvm, boolean threadPool, boolean network, + public NodeStats stats(CommonStatsFlags indices, boolean os, boolean process, boolean jvm, boolean threadPool, boolean fs, boolean transport, boolean http, boolean circuitBreaker, boolean script) { // for indices stats we want to include previous allocated shards stats as well (it will diff --git a/core/src/main/java/org/elasticsearch/plugins/PluginManager.java b/core/src/main/java/org/elasticsearch/plugins/PluginManager.java index fd140ec3303..fbd5b74ee1d 100644 --- a/core/src/main/java/org/elasticsearch/plugins/PluginManager.java +++ b/core/src/main/java/org/elasticsearch/plugins/PluginManager.java @@ -75,18 +75,19 @@ public class PluginManager { static final ImmutableSet OFFICIAL_PLUGINS = ImmutableSet. 
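Editor's note: the RecoveryStatus change above turns opening a second output for the same recovered file into an explicit IllegalStateException instead of silently clobbering the map entry. A standalone imitation of that guard; the names are illustrative, and putIfAbsent is used here where the real code checks the map before putting:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Refuse to open a second output for the same recovered file.
public class TempFileRegistrySketch {
    private final ConcurrentMap<String, String> tempFileNames = new ConcurrentHashMap<>();

    String openAndPut(String fileName) {
        String tempFileName = "recovery." + fileName; // stand-in for getTempNameForFile
        if (tempFileNames.putIfAbsent(tempFileName, fileName) != null) {
            throw new IllegalStateException("output for file [" + fileName + "] has already been created");
        }
        return tempFileName; // a real implementation would open a verifying IndexOutput here
    }

    public static void main(String[] args) {
        TempFileRegistrySketch registry = new TempFileRegistrySketch();
        registry.openAndPut("segments_1");
        try {
            registry.openAndPut("segments_1");
        } catch (IllegalStateException expected) {
            System.out.println(expected.getMessage());
        }
    }
}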
builder() .add( - "elasticsearch-analysis-icu", - "elasticsearch-analysis-kuromoji", - "elasticsearch-analysis-phonetic", - "elasticsearch-analysis-smartcn", - "elasticsearch-analysis-stempel", - "elasticsearch-cloud-aws", - "elasticsearch-cloud-azure", - "elasticsearch-cloud-gce", - "elasticsearch-delete-by-query", - "elasticsearch-lang-javascript", - "elasticsearch-lang-python", - "elasticsearch-mapper-size" + "analysis-icu", + "analysis-kuromoji", + "analysis-phonetic", + "analysis-smartcn", + "analysis-stempel", + "cloud-aws", + "cloud-azure", + "cloud-gce", + "delete-by-query", + "lang-javascript", + "lang-python", + "mapper-murmur3", + "mapper-size" ).build(); private final Environment environment; @@ -162,7 +163,7 @@ public class PluginManager { terminal.println("Failed: %s", ExceptionsHelper.detailedMessage(e)); } } else { - if (PluginHandle.isOfficialPlugin(pluginHandle.repo, pluginHandle.user, pluginHandle.version)) { + if (PluginHandle.isOfficialPlugin(pluginHandle.name, pluginHandle.user, pluginHandle.version)) { checkForOfficialPlugins(pluginHandle.name); } } @@ -437,43 +438,41 @@ public class PluginManager { */ static class PluginHandle { - final String name; final String version; final String user; - final String repo; + final String name; - PluginHandle(String name, String version, String user, String repo) { - this.name = name; + PluginHandle(String name, String version, String user) { this.version = version; this.user = user; - this.repo = repo; + this.name = name; } List urls() { List urls = new ArrayList<>(); if (version != null) { - // Elasticsearch new download service uses groupId org.elasticsearch.plugins from 2.0.0 + // Elasticsearch new download service uses groupId org.elasticsearch.plugin from 2.0.0 if (user == null) { // TODO Update to https if (!Strings.isNullOrEmpty(System.getProperty(PROPERTY_SUPPORT_STAGING_URLS))) { - addUrl(urls, String.format(Locale.ROOT, "http://download.elastic.co/elasticsearch/staging/%s/org/elasticsearch/plugin/elasticsearch-%s/%s/elasticsearch-%s-%s.zip", Build.CURRENT.hashShort(), repo, version, repo, version)); + addUrl(urls, String.format(Locale.ROOT, "http://download.elastic.co/elasticsearch/staging/%s-%s/org/elasticsearch/plugin/%s/%s/%s-%s.zip", version, Build.CURRENT.hashShort(), name, version, name, version)); } - addUrl(urls, String.format(Locale.ROOT, "http://download.elastic.co/elasticsearch/release/org/elasticsearch/plugin/elasticsearch-%s/%s/elasticsearch-%s-%s.zip", repo, version, repo, version)); + addUrl(urls, String.format(Locale.ROOT, "http://download.elastic.co/elasticsearch/release/org/elasticsearch/plugin/%s/%s/%s-%s.zip", name, version, name, version)); } else { // Elasticsearch old download service // TODO Update to https - addUrl(urls, String.format(Locale.ROOT, "http://download.elastic.co/%1$s/%2$s/%2$s-%3$s.zip", user, repo, version)); + addUrl(urls, String.format(Locale.ROOT, "http://download.elastic.co/%1$s/%2$s/%2$s-%3$s.zip", user, name, version)); // Maven central repository - addUrl(urls, String.format(Locale.ROOT, "http://search.maven.org/remotecontent?filepath=%1$s/%2$s/%3$s/%2$s-%3$s.zip", user.replace('.', '/'), repo, version)); + addUrl(urls, String.format(Locale.ROOT, "http://search.maven.org/remotecontent?filepath=%1$s/%2$s/%3$s/%2$s-%3$s.zip", user.replace('.', '/'), name, version)); // Sonatype repository - addUrl(urls, String.format(Locale.ROOT, "https://oss.sonatype.org/service/local/repositories/releases/content/%1$s/%2$s/%3$s/%2$s-%3$s.zip", user.replace('.', '/'), repo, version)); + 
addUrl(urls, String.format(Locale.ROOT, "https://oss.sonatype.org/service/local/repositories/releases/content/%1$s/%2$s/%3$s/%2$s-%3$s.zip", user.replace('.', '/'), name, version)); // Github repository - addUrl(urls, String.format(Locale.ROOT, "https://github.com/%1$s/%2$s/archive/%3$s.zip", user, repo, version)); + addUrl(urls, String.format(Locale.ROOT, "https://github.com/%1$s/%2$s/archive/%3$s.zip", user, name, version)); } } if (user != null) { // Github repository for master branch (assume site) - addUrl(urls, String.format(Locale.ROOT, "https://github.com/%1$s/%2$s/archive/master.zip", user, repo)); + addUrl(urls, String.format(Locale.ROOT, "https://github.com/%1$s/%2$s/archive/master.zip", user, name)); } return urls; } @@ -525,20 +524,11 @@ public class PluginManager { } } - String endname = repo; - if (repo.startsWith("elasticsearch-")) { - // remove elasticsearch- prefix - endname = repo.substring("elasticsearch-".length()); - } else if (repo.startsWith("es-")) { - // remove es- prefix - endname = repo.substring("es-".length()); - } - if (isOfficialPlugin(repo, user, version)) { - return new PluginHandle(endname, Version.CURRENT.number(), null, repo); + return new PluginHandle(repo, Version.CURRENT.number(), null); } - return new PluginHandle(endname, version, user, repo); + return new PluginHandle(repo, version, user); } static boolean isOfficialPlugin(String repo, String user, String version) { diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/node/info/RestNodesInfoAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/node/info/RestNodesInfoAction.java index a78c90aca63..06c5a224a81 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/node/info/RestNodesInfoAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/node/info/RestNodesInfoAction.java @@ -42,7 +42,7 @@ import static org.elasticsearch.rest.RestRequest.Method.GET; public class RestNodesInfoAction extends BaseRestHandler { private final SettingsFilter settingsFilter; - private final static Set ALLOWED_METRICS = Sets.newHashSet("http", "jvm", "network", "os", "plugins", "process", "settings", "thread_pool", "transport"); + private final static Set ALLOWED_METRICS = Sets.newHashSet("http", "jvm", "os", "plugins", "process", "settings", "thread_pool", "transport"); @Inject public RestNodesInfoAction(Settings settings, RestController controller, Client client, SettingsFilter settingsFilter) { @@ -91,7 +91,6 @@ public class RestNodesInfoAction extends BaseRestHandler { nodesInfoRequest.process(metrics.contains("process")); nodesInfoRequest.jvm(metrics.contains("jvm")); nodesInfoRequest.threadPool(metrics.contains("thread_pool")); - nodesInfoRequest.network(metrics.contains("network")); nodesInfoRequest.transport(metrics.contains("transport")); nodesInfoRequest.http(metrics.contains("http")); nodesInfoRequest.plugins(metrics.contains("plugins")); diff --git a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/node/stats/RestNodesStatsAction.java b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/node/stats/RestNodesStatsAction.java index 3c231de534e..fa146b57f06 100644 --- a/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/node/stats/RestNodesStatsAction.java +++ b/core/src/main/java/org/elasticsearch/rest/action/admin/cluster/node/stats/RestNodesStatsAction.java @@ -69,7 +69,6 @@ public class RestNodesStatsAction extends BaseRestHandler { nodesStatsRequest.os(metrics.contains("os")); 
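Editor's note: with the elasticsearch- prefix dropped from official plugin names, download URLs are built from the bare name. The sketch below reproduces only the release URL format string from the change above; the staging and user-repository URLs follow the same substitution pattern:

import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

// The new download URL shape for official plugins, release repository only.
public class PluginUrlSketch {
    static List<String> urls(String name, String version) {
        List<String> urls = new ArrayList<>();
        urls.add(String.format(Locale.ROOT,
            "http://download.elastic.co/elasticsearch/release/org/elasticsearch/plugin/%s/%s/%s-%s.zip",
            name, version, name, version));
        return urls;
    }

    public static void main(String[] args) {
        // e.g. "plugin install analysis-kuromoji" would resolve against something like:
        System.out.println(urls("analysis-kuromoji", "2.0.0"));
    }
}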
nodesStatsRequest.jvm(metrics.contains("jvm")); nodesStatsRequest.threadPool(metrics.contains("thread_pool")); - nodesStatsRequest.network(metrics.contains("network")); nodesStatsRequest.fs(metrics.contains("fs")); nodesStatsRequest.transport(metrics.contains("transport")); nodesStatsRequest.http(metrics.contains("http")); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/InternalTerms.java b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/InternalTerms.java index 4c4b2e8371d..9cc1f603123 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/InternalTerms.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/bucket/terms/InternalTerms.java @@ -24,6 +24,7 @@ import com.google.common.collect.Multimap; import org.elasticsearch.common.io.stream.Streamable; import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.search.aggregations.AggregationExecutionException; import org.elasticsearch.search.aggregations.Aggregations; import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.InternalAggregations; @@ -170,8 +171,25 @@ public abstract class InternalTerms buckets = ArrayListMultimap.create(); long sumDocCountError = 0; long otherDocCount = 0; + InternalTerms referenceTerms = null; for (InternalAggregation aggregation : aggregations) { InternalTerms terms = (InternalTerms) aggregation; + if (referenceTerms == null && !terms.getClass().equals(UnmappedTerms.class)) { + referenceTerms = (InternalTerms) aggregation; + } + if (referenceTerms != null && + !referenceTerms.getClass().equals(terms.getClass()) && + !terms.getClass().equals(UnmappedTerms.class)) { + // control gets into this loop when the same field name against which the query is executed + // is of different types in different indices. 
+ throw new AggregationExecutionException("Merging/Reducing the aggregations failed " + + "when computing the aggregation [ Name: " + + referenceTerms.getName() + ", Type: " + + referenceTerms.type() + " ]" + " because: " + + "the field you gave in the aggregation query " + + "existed as two different types " + + "in two different indices"); + } otherDocCount += terms.getSumOfOtherDocCounts(); final long thisAggDocCountError; if (terms.buckets.size() < this.shardSize || this.order == InternalOrder.TERM_ASC || this.order == InternalOrder.TERM_DESC) { diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregator.java index 6a72718f134..0a3918de46b 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregator.java @@ -56,7 +56,6 @@ import java.util.Map; public class CardinalityAggregator extends NumericMetricsAggregator.SingleValue { private final int precision; - private final boolean rehash; private final ValuesSource valuesSource; // Expensive to initialize, so we only initialize it when we have an actual value source @@ -66,11 +65,10 @@ public class CardinalityAggregator extends NumericMetricsAggregator.SingleValue private Collector collector; private ValueFormatter formatter; - public CardinalityAggregator(String name, ValuesSource valuesSource, boolean rehash, int precision, ValueFormatter formatter, + public CardinalityAggregator(String name, ValuesSource valuesSource, int precision, ValueFormatter formatter, AggregationContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { super(name, context, parent, pipelineAggregators, metaData); this.valuesSource = valuesSource; - this.rehash = rehash; this.precision = precision; this.counts = valuesSource == null ? 
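Editor's note: the InternalTerms change above fails a cross-index reduce when the same field is mapped to different types in different indices, using the first mapped shard result as the reference and letting UnmappedTerms pass through. A standalone imitation of that check, with toy Terms classes standing in for LongTerms, StringTerms, and UnmappedTerms:

import java.util.Arrays;
import java.util.List;

// The first mapped result becomes the reference; any differently-typed mapped
// result is rejected, while "unmapped" results are always allowed through.
public class MixedTypeReduceSketch {
    interface Terms {}
    static final class LongTerms implements Terms {}
    static final class StringTerms implements Terms {}
    static final class Unmapped implements Terms {}

    static void reduce(List<Terms> shardResults) {
        Terms reference = null;
        for (Terms t : shardResults) {
            if (reference == null && !(t instanceof Unmapped)) {
                reference = t;
            }
            if (reference != null && !(t instanceof Unmapped)
                    && !reference.getClass().equals(t.getClass())) {
                throw new IllegalStateException(
                    "the aggregated field exists as two different types in two different indices");
            }
        }
    }

    public static void main(String[] args) {
        reduce(Arrays.asList(new Unmapped(), new LongTerms(), new LongTerms())); // fine
        try {
            reduce(Arrays.asList(new LongTerms(), new StringTerms()));
        } catch (IllegalStateException expected) {
            System.out.println(expected.getMessage());
        }
    }
}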
null : new HyperLogLogPlusPlus(precision, context.bigArrays(), 1); this.formatter = formatter; @@ -85,13 +83,6 @@ public class CardinalityAggregator extends NumericMetricsAggregator.SingleValue if (valuesSource == null) { return new EmptyCollector(); } - // if rehash is false then the value source is either already hashed, or the user explicitly - // requested not to hash the values (perhaps they already hashed the values themselves before indexing the doc) - // so we can just work with the original value source as is - if (!rehash) { - MurmurHash3Values hashValues = MurmurHash3Values.cast(((ValuesSource.Numeric) valuesSource).longValues(ctx)); - return new DirectCollector(counts, hashValues); - } if (valuesSource instanceof ValuesSource.Numeric) { ValuesSource.Numeric source = (ValuesSource.Numeric) valuesSource; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregatorFactory.java index 9320020ca5b..1e660f06f45 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregatorFactory.java @@ -19,7 +19,6 @@ package org.elasticsearch.search.aggregations.metrics.cardinality; -import org.elasticsearch.search.aggregations.AggregationExecutionException; import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.bucket.SingleBucketAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; @@ -35,12 +34,10 @@ import java.util.Map; final class CardinalityAggregatorFactory extends ValuesSourceAggregatorFactory { private final long precisionThreshold; - private final boolean rehash; - CardinalityAggregatorFactory(String name, ValuesSourceConfig config, long precisionThreshold, boolean rehash) { + CardinalityAggregatorFactory(String name, ValuesSourceConfig config, long precisionThreshold) { super(name, InternalCardinality.TYPE.name(), config); this.precisionThreshold = precisionThreshold; - this.rehash = rehash; } private int precision(Aggregator parent) { @@ -50,16 +47,13 @@ final class CardinalityAggregatorFactory extends ValuesSourceAggregatorFactory pipelineAggregators, Map metaData) throws IOException { - return new CardinalityAggregator(name, null, true, precision(parent), config.formatter(), context, parent, pipelineAggregators, metaData); + return new CardinalityAggregator(name, null, precision(parent), config.formatter(), context, parent, pipelineAggregators, metaData); } @Override protected Aggregator doCreateInternal(ValuesSource valuesSource, AggregationContext context, Aggregator parent, boolean collectsFromSingleBucket, List pipelineAggregators, Map metaData) throws IOException { - if (!(valuesSource instanceof ValuesSource.Numeric) && !rehash) { - throw new AggregationExecutionException("Turning off rehashing for cardinality aggregation [" + name + "] on non-numeric values in not allowed"); - } - return new CardinalityAggregator(name, valuesSource, rehash, precision(parent), config.formatter(), context, parent, pipelineAggregators, + return new CardinalityAggregator(name, valuesSource, precision(parent), config.formatter(), context, parent, pipelineAggregators, metaData); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityParser.java 
b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityParser.java index afeca77d3a9..68339457fe7 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityParser.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityParser.java @@ -21,11 +21,9 @@ package org.elasticsearch.search.aggregations.metrics.cardinality; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.index.mapper.core.Murmur3FieldMapper; import org.elasticsearch.search.SearchParseException; import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; import org.elasticsearch.search.aggregations.support.ValuesSourceParser; import org.elasticsearch.search.internal.SearchContext; @@ -35,6 +33,7 @@ import java.io.IOException; public class CardinalityParser implements Aggregator.Parser { private static final ParseField PRECISION_THRESHOLD = new ParseField("precision_threshold"); + private static final ParseField REHASH = new ParseField("rehash").withAllDeprecated("no replacement - values will always be rehashed"); @Override public String type() { @@ -44,10 +43,9 @@ public class CardinalityParser implements Aggregator.Parser { @Override public AggregatorFactory parse(String name, XContentParser parser, SearchContext context) throws IOException { - ValuesSourceParser vsParser = ValuesSourceParser.any(name, InternalCardinality.TYPE, context).formattable(false).build(); + ValuesSourceParser> vsParser = ValuesSourceParser.any(name, InternalCardinality.TYPE, context).formattable(false).build(); long precisionThreshold = -1; - Boolean rehash = null; XContentParser.Token token; String currentFieldName = null; @@ -57,8 +55,8 @@ public class CardinalityParser implements Aggregator.Parser { } else if (vsParser.token(currentFieldName, token, parser)) { continue; } else if (token.isValue()) { - if ("rehash".equals(currentFieldName)) { - rehash = parser.booleanValue(); + if (context.parseFieldMatcher().match(currentFieldName, REHASH)) { + // ignore } else if (context.parseFieldMatcher().match(currentFieldName, PRECISION_THRESHOLD)) { precisionThreshold = parser.longValue(); } else { @@ -70,15 +68,7 @@ public class CardinalityParser implements Aggregator.Parser { } } - ValuesSourceConfig> config = vsParser.config(); - - if (rehash == null && config.fieldContext() != null && config.fieldContext().fieldType() instanceof Murmur3FieldMapper.Murmur3FieldType) { - rehash = false; - } else if (rehash == null) { - rehash = true; - } - - return new CardinalityAggregatorFactory(name, config, precisionThreshold, rehash); + return new CardinalityAggregatorFactory(name, vsParser.config(), precisionThreshold); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/support/AggregationContext.java b/core/src/main/java/org/elasticsearch/search/aggregations/support/AggregationContext.java index b7c3f33bf5b..ee917824f5c 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/support/AggregationContext.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/support/AggregationContext.java @@ -156,6 +156,12 @@ public class AggregationContext { } private ValuesSource.Numeric numericField(ValuesSourceConfig> config) throws IOException { + + if (!(config.fieldContext.indexFieldData() instanceof 
IndexNumericFieldData)) { + throw new IllegalArgumentException("Expected numeric type on field [" + config.fieldContext.field() + + "], but got [" + config.fieldContext.fieldType().typeName() + "]"); + } + ValuesSource.Numeric dataSource = new ValuesSource.Numeric.FieldData((IndexNumericFieldData) config.fieldContext.indexFieldData()); if (config.script != null) { dataSource = new ValuesSource.Numeric.WithScript(dataSource, config.script); @@ -184,6 +190,12 @@ public class AggregationContext { } private ValuesSource.GeoPoint geoPointField(ValuesSourceConfig> config) throws IOException { + + if (!(config.fieldContext.indexFieldData() instanceof IndexGeoPointFieldData)) { + throw new IllegalArgumentException("Expected geo_point type on field [" + config.fieldContext.field() + + "], but got [" + config.fieldContext.fieldType().typeName() + "]"); + } + return new ValuesSource.GeoPoint.Fielddata((IndexGeoPointFieldData) config.fieldContext.indexFieldData()); } diff --git a/core/src/main/java/org/elasticsearch/search/highlight/Highlighters.java b/core/src/main/java/org/elasticsearch/search/highlight/Highlighters.java index 349227f5c57..1e519957aac 100644 --- a/core/src/main/java/org/elasticsearch/search/highlight/Highlighters.java +++ b/core/src/main/java/org/elasticsearch/search/highlight/Highlighters.java @@ -29,7 +29,7 @@ import java.util.*; /** * An extensions point and registry for all the highlighters a node supports. */ -public class Highlighters extends ExtensionPoint.MapExtensionPoint { +public class Highlighters extends ExtensionPoint.ClassMap { @Deprecated // remove in 3.0 private static final String FAST_VECTOR_HIGHLIGHTER = "fast-vector-highlighter"; diff --git a/core/src/main/java/org/elasticsearch/search/sort/GeoDistanceSortBuilder.java b/core/src/main/java/org/elasticsearch/search/sort/GeoDistanceSortBuilder.java index 5a128f44e02..8f502e53058 100644 --- a/core/src/main/java/org/elasticsearch/search/sort/GeoDistanceSortBuilder.java +++ b/core/src/main/java/org/elasticsearch/search/sort/GeoDistanceSortBuilder.java @@ -47,6 +47,8 @@ public class GeoDistanceSortBuilder extends SortBuilder { private String sortMode; private QueryBuilder nestedFilter; private String nestedPath; + private Boolean coerce; + private Boolean ignoreMalformed; /** * Constructs a new distance based sort on a geo point like field. 
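Editor's note: AggregationContext now fails fast with a descriptive IllegalArgumentException when the field data does not match the expected numeric (or geo_point) value source, instead of a bare ClassCastException from the cast. A standalone version of the numeric case; the real message uses fieldType().typeName() where this sketch uses the class name:

// Fail fast on a mismatched field type instead of a ClassCastException
// deep inside the aggregation.
public class FieldTypeCheckSketch {
    interface IndexFieldData {}
    static final class IndexNumericFieldData implements IndexFieldData {}
    static final class IndexStringFieldData implements IndexFieldData {}

    static IndexNumericFieldData numericField(String field, IndexFieldData fieldData) {
        if (!(fieldData instanceof IndexNumericFieldData)) {
            throw new IllegalArgumentException("Expected numeric type on field [" + field
                + "], but got [" + fieldData.getClass().getSimpleName() + "]");
        }
        return (IndexNumericFieldData) fieldData;
    }

    public static void main(String[] args) {
        numericField("price", new IndexNumericFieldData()); // ok
        try {
            numericField("title", new IndexStringFieldData());
        } catch (IllegalArgumentException expected) {
            System.out.println(expected.getMessage()); // clear error, not a CCE
        }
    }
}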
@@ -146,6 +148,16 @@ public class GeoDistanceSortBuilder extends SortBuilder { return this; } + public GeoDistanceSortBuilder coerce(boolean coerce) { + this.coerce = coerce; + return this; + } + + public GeoDistanceSortBuilder ignoreMalformed(boolean ignoreMalformed) { + this.ignoreMalformed = ignoreMalformed; + return this; + } + @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject("_geo_distance"); @@ -181,6 +193,12 @@ public class GeoDistanceSortBuilder extends SortBuilder { if (nestedFilter != null) { builder.field("nested_filter", nestedFilter, params); } + if (coerce != null) { + builder.field("coerce", coerce); + } + if (ignoreMalformed != null) { + builder.field("ignore_malformed", ignoreMalformed); + } builder.endObject(); return builder; diff --git a/core/src/main/java/org/elasticsearch/search/sort/GeoDistanceSortParser.java b/core/src/main/java/org/elasticsearch/search/sort/GeoDistanceSortParser.java index e7941a41d13..6f4a0dfbb4a 100644 --- a/core/src/main/java/org/elasticsearch/search/sort/GeoDistanceSortParser.java +++ b/core/src/main/java/org/elasticsearch/search/sort/GeoDistanceSortParser.java @@ -29,6 +29,7 @@ import org.apache.lucene.search.SortField; import org.apache.lucene.search.join.BitDocIdSetFilter; import org.apache.lucene.util.BitSet; import org.elasticsearch.ElasticsearchParseException; +import org.elasticsearch.Version; import org.elasticsearch.common.geo.GeoDistance; import org.elasticsearch.common.geo.GeoDistance.FixedSourceDistance; import org.elasticsearch.common.geo.GeoPoint; @@ -42,7 +43,6 @@ import org.elasticsearch.index.fielddata.IndexGeoPointFieldData; import org.elasticsearch.index.fielddata.MultiGeoPointValues; import org.elasticsearch.index.fielddata.NumericDoubleValues; import org.elasticsearch.index.fielddata.SortedNumericDoubleValues; -import org.elasticsearch.index.mapper.FieldMapper; import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.mapper.object.ObjectMapper; import org.elasticsearch.index.query.support.NestedInnerQueryParseSupport; @@ -73,8 +73,9 @@ public class GeoDistanceSortParser implements SortParser { MultiValueMode sortMode = null; NestedInnerQueryParseSupport nestedHelper = null; - boolean normalizeLon = true; - boolean normalizeLat = true; + final boolean indexCreatedBeforeV2_0 = context.queryParserService().getIndexCreatedVersion().before(Version.V_2_0_0); + boolean coerce = false; + boolean ignoreMalformed = false; XContentParser.Token token; String currentName = parser.currentName(); @@ -107,9 +108,13 @@ public class GeoDistanceSortParser implements SortParser { unit = DistanceUnit.fromString(parser.text()); } else if (currentName.equals("distance_type") || currentName.equals("distanceType")) { geoDistance = GeoDistance.fromString(parser.text()); - } else if ("normalize".equals(currentName)) { - normalizeLat = parser.booleanValue(); - normalizeLon = parser.booleanValue(); + } else if ("coerce".equals(currentName) || (indexCreatedBeforeV2_0 && "normalize".equals(currentName))) { + coerce = parser.booleanValue(); + if (coerce == true) { + ignoreMalformed = true; + } + } else if ("ignore_malformed".equals(currentName) && coerce == false) { + ignoreMalformed = parser.booleanValue(); } else if ("sort_mode".equals(currentName) || "sortMode".equals(currentName) || "mode".equals(currentName)) { sortMode = MultiValueMode.fromString(parser.text()); } else if ("nested_path".equals(currentName) || "nestedPath".equals(currentName)) { 
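Editor's note: GeoDistanceSortBuilder gains optional coerce and ignore_malformed settings, held as Boolean so an unset value emits nothing in toXContent. A standalone mirror of that optional-field emission, building JSON by hand purely for illustration (point, unit, and the other sort fields are elided):

// Boolean (not boolean) fields, so "unset" serializes nothing at all.
public class GeoSortBuilderSketch {
    private Boolean coerce;
    private Boolean ignoreMalformed;

    GeoSortBuilderSketch coerce(boolean coerce) { this.coerce = coerce; return this; }
    GeoSortBuilderSketch ignoreMalformed(boolean v) { this.ignoreMalformed = v; return this; }

    String toJson() {
        StringBuilder sb = new StringBuilder("{\"_geo_distance\":{");
        // ... point, unit, mode, nested_filter elided ...
        if (coerce != null) sb.append("\"coerce\":").append(coerce).append(',');
        if (ignoreMalformed != null) sb.append("\"ignore_malformed\":").append(ignoreMalformed).append(',');
        if (sb.charAt(sb.length() - 1) == ',') sb.setLength(sb.length() - 1);
        return sb.append("}}").toString();
    }

    public static void main(String[] args) {
        System.out.println(new GeoSortBuilderSketch().coerce(true).toJson());
        System.out.println(new GeoSortBuilderSketch().toJson()); // neither field emitted
    }
}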
@@ -126,9 +131,21 @@ public class GeoDistanceSortParser implements SortParser { } } - if (normalizeLat || normalizeLon) { + // validation was not available prior to 2.x, so to support bwc percolation queries we only ignore_malformed on 2.x created indexes + if (!indexCreatedBeforeV2_0 && !ignoreMalformed) { for (GeoPoint point : geoPoints) { - GeoUtils.normalizePoint(point, normalizeLat, normalizeLon); + if (point.lat() > 90.0 || point.lat() < -90.0) { + throw new ElasticsearchParseException("illegal latitude value [{}] for [GeoDistanceSort]", point.lat()); + } + if (point.lon() > 180.0 || point.lon() < -180) { + throw new ElasticsearchParseException("illegal longitude value [{}] for [GeoDistanceSort]", point.lon()); + } + } + } + + if (coerce) { + for (GeoPoint point : geoPoints) { + GeoUtils.normalizePoint(point, coerce, coerce); } } diff --git a/core/src/main/java/org/elasticsearch/search/suggest/Suggesters.java b/core/src/main/java/org/elasticsearch/search/suggest/Suggesters.java index 1be80b57502..25703e80b6a 100644 --- a/core/src/main/java/org/elasticsearch/search/suggest/Suggesters.java +++ b/core/src/main/java/org/elasticsearch/search/suggest/Suggesters.java @@ -18,7 +18,6 @@ */ package org.elasticsearch.search.suggest; -import org.elasticsearch.common.inject.Binder; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.util.ExtensionPoint; import org.elasticsearch.script.ScriptService; @@ -31,7 +30,7 @@ import java.util.*; /** * */ -public final class Suggesters extends ExtensionPoint.MapExtensionPoint { +public final class Suggesters extends ExtensionPoint.ClassMap { private final Map parsers; public Suggesters() { diff --git a/core/src/main/java/org/elasticsearch/transport/netty/NettyTransport.java b/core/src/main/java/org/elasticsearch/transport/netty/NettyTransport.java index 0b85bb63211..8a43e0581bd 100644 --- a/core/src/main/java/org/elasticsearch/transport/netty/NettyTransport.java +++ b/core/src/main/java/org/elasticsearch/transport/netty/NettyTransport.java @@ -146,8 +146,8 @@ public class NettyTransport extends AbstractLifecycleComponent implem // node id to actual channel protected final ConcurrentMap connectedNodes = newConcurrentMap(); protected final Map serverBootstraps = newConcurrentMap(); - protected final Map serverChannels = newConcurrentMap(); - protected final Map profileBoundAddresses = newConcurrentMap(); + protected final Map > serverChannels = newConcurrentMap(); + protected final ConcurrentMap profileBoundAddresses = newConcurrentMap(); protected volatile TransportServiceAdapter transportServiceAdapter; protected volatile BoundTransportAddress boundAddress; protected final KeyedLock connectionLock = new KeyedLock<>(); @@ -286,7 +286,7 @@ public class NettyTransport extends AbstractLifecycleComponent implem bindServerBootstrap(name, mergedSettings); } - InetSocketAddress boundAddress = (InetSocketAddress) serverChannels.get(DEFAULT_PROFILE).getLocalAddress(); + InetSocketAddress boundAddress = (InetSocketAddress) serverChannels.get(DEFAULT_PROFILE).get(0).getLocalAddress(); int publishPort = settings.getAsInt("transport.netty.publish_port", settings.getAsInt("transport.publish_port", boundAddress.getPort())); String publishHost = settings.get("transport.netty.publish_host", settings.get("transport.publish_host", settings.get("transport.host"))); InetSocketAddress publishAddress = createPublishAddress(publishHost, publishPort); @@ -397,23 +397,38 @@ public class NettyTransport extends AbstractLifecycleComponent implem private void 
bindServerBootstrap(final String name, final Settings settings) { // Bind and start to accept incoming connections. - InetAddress hostAddressX; + InetAddress hostAddresses[]; String bindHost = settings.get("bind_host"); try { - hostAddressX = networkService.resolveBindHostAddress(bindHost); + hostAddresses = networkService.resolveBindHostAddress(bindHost); } catch (IOException e) { throw new BindTransportException("Failed to resolve host [" + bindHost + "]", e); } - final InetAddress hostAddress = hostAddressX; + for (InetAddress hostAddress : hostAddresses) { + bindServerBootstrap(name, hostAddress, settings); + } + } + + private void bindServerBootstrap(final String name, final InetAddress hostAddress, Settings settings) { String port = settings.get("port"); PortsRange portsRange = new PortsRange(port); final AtomicReference lastException = new AtomicReference<>(); + final AtomicReference boundSocket = new AtomicReference<>(); boolean success = portsRange.iterate(new PortsRange.PortCallback() { @Override public boolean onPortNumber(int portNumber) { try { - serverChannels.put(name, serverBootstraps.get(name).bind(new InetSocketAddress(hostAddress, portNumber))); + Channel channel = serverBootstraps.get(name).bind(new InetSocketAddress(hostAddress, portNumber)); + synchronized (serverChannels) { + List list = serverChannels.get(name); + if (list == null) { + list = new ArrayList<>(); + serverChannels.put(name, list); + } + list.add(channel); + boundSocket.set(channel.getLocalAddress()); + } } catch (Exception e) { lastException.set(e); return false; @@ -426,14 +441,15 @@ public class NettyTransport extends AbstractLifecycleComponent implem } if (!DEFAULT_PROFILE.equals(name)) { - InetSocketAddress boundAddress = (InetSocketAddress) serverChannels.get(name).getLocalAddress(); + InetSocketAddress boundAddress = (InetSocketAddress) boundSocket.get(); int publishPort = settings.getAsInt("publish_port", boundAddress.getPort()); String publishHost = settings.get("publish_host", boundAddress.getHostString()); InetSocketAddress publishAddress = createPublishAddress(publishHost, publishPort); - profileBoundAddresses.put(name, new BoundTransportAddress(new InetSocketTransportAddress(boundAddress), new InetSocketTransportAddress(publishAddress))); + // TODO: support real multihoming with publishing. 
Today we use putIfAbsent so only the prioritized address is published + profileBoundAddresses.putIfAbsent(name, new BoundTransportAddress(new InetSocketTransportAddress(boundAddress), new InetSocketTransportAddress(publishAddress))); } - logger.debug("Bound profile [{}] to address [{}]", name, serverChannels.get(name).getLocalAddress()); + logger.info("Bound profile [{}] to address [{}]", name, boundSocket.get()); } private void createServerBootstrap(String name, Settings settings) { @@ -500,15 +516,17 @@ public class NettyTransport extends AbstractLifecycleComponent implem nodeChannels.close(); } - Iterator > serverChannelIterator = serverChannels.entrySet().iterator(); + Iterator >> serverChannelIterator = serverChannels.entrySet().iterator(); while (serverChannelIterator.hasNext()) { - Map.Entry serverChannelEntry = serverChannelIterator.next(); + Map.Entry > serverChannelEntry = serverChannelIterator.next(); String name = serverChannelEntry.getKey(); - Channel serverChannel = serverChannelEntry.getValue(); - try { - serverChannel.close().awaitUninterruptibly(); - } catch (Throwable t) { - logger.debug("Error closing serverChannel for profile [{}]", t, name); + List serverChannels = serverChannelEntry.getValue(); + for (Channel serverChannel : serverChannels) { + try { + serverChannel.close().awaitUninterruptibly(); + } catch (Throwable t) { + logger.debug("Error closing serverChannel for profile [{}]", t, name); + } } serverChannelIterator.remove(); } diff --git a/core/src/main/resources/org/elasticsearch/bootstrap/elasticsearch.help b/core/src/main/resources/org/elasticsearch/bootstrap/elasticsearch.help index 8d6834dae26..83ee497dc21 100644 --- a/core/src/main/resources/org/elasticsearch/bootstrap/elasticsearch.help +++ b/core/src/main/resources/org/elasticsearch/bootstrap/elasticsearch.help @@ -8,7 +8,7 @@ SYNOPSIS DESCRIPTION - Start elasticsearch and manage plugins + Start an elasticsearch node COMMANDS diff --git a/core/src/main/resources/org/elasticsearch/plugins/plugin-install.help b/core/src/main/resources/org/elasticsearch/plugins/plugin-install.help index 26fbf6a2313..9aa943da46c 100644 --- a/core/src/main/resources/org/elasticsearch/plugins/plugin-install.help +++ b/core/src/main/resources/org/elasticsearch/plugins/plugin-install.help @@ -22,7 +22,7 @@ DESCRIPTION EXAMPLES - plugin install elasticsearch-analysis-kuromoji + plugin install analysis-kuromoji plugin install elasticsearch/shield/latest @@ -32,23 +32,24 @@ OFFICIAL PLUGINS The following plugins are officially supported and can be installed by just referring to their name - - elasticsearch-analysis-icu - - elasticsearch-analysis-kuromoji - - elasticsearch-analysis-phonetic - - elasticsearch-analysis-smartcn - - elasticsearch-analysis-stempel - - elasticsearch-cloud-aws - - elasticsearch-cloud-azure - - elasticsearch-cloud-gce - - elasticsearch-delete-by-query - - elasticsearch-lang-javascript - - elasticsearch-lang-python - - elasticsearch-mapper-size + - analysis-icu + - analysis-kuromoji + - analysis-phonetic + - analysis-smartcn + - analysis-stempel + - cloud-aws + - cloud-azure + - cloud-gce + - delete-by-query + - lang-javascript + - lang-python + - mapper-murmur3 + - mapper-size OPTIONS - -u,--url URL to retrive the plugin from + -u,--url URL to retrieve the plugin from -t,--timeout Timeout until the plugin download is abort diff --git a/core/src/test/java/org/elasticsearch/cluster/routing/allocation/decider/MockDiskUsagesIT.java 
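Editor's note: the NettyTransport change above binds one server channel per resolved address and keeps a list per profile, publishing only the first bound address via putIfAbsent until real multihomed publishing exists. A rough standalone sketch of that shape using plain ServerSockets, ignoring the ports-range retry logic and the Netty bootstraps of the real code:

import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// One bound socket per resolved address, all recorded under the profile name,
// instead of a single channel per profile.
public class MultiBindSketch {
    static final Map<String, List<ServerSocket>> serverChannels = new HashMap<>();

    static void bind(String profile, InetAddress[] hostAddresses, int port) throws IOException {
        for (InetAddress address : hostAddresses) {
            ServerSocket socket = new ServerSocket();
            socket.bind(new InetSocketAddress(address, port));
            serverChannels.computeIfAbsent(profile, k -> new ArrayList<>()).add(socket);
            System.out.println("Bound profile [" + profile + "] to address ["
                + socket.getLocalSocketAddress() + "]");
        }
    }

    public static void main(String[] args) throws IOException {
        bind("default", new InetAddress[] { InetAddress.getByName("127.0.0.1") }, 0); // 0 = ephemeral port
        for (List<ServerSocket> sockets : serverChannels.values()) {
            for (ServerSocket s : sockets) s.close();
        }
    }
}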
b/core/src/test/java/org/elasticsearch/cluster/routing/allocation/decider/MockDiskUsagesIT.java index 79612d07b0e..b0ae30a06d4 100644 --- a/core/src/test/java/org/elasticsearch/cluster/routing/allocation/decider/MockDiskUsagesIT.java +++ b/core/src/test/java/org/elasticsearch/cluster/routing/allocation/decider/MockDiskUsagesIT.java @@ -27,6 +27,7 @@ import org.elasticsearch.cluster.*; import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.routing.RoutingNode; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.transport.DummyTransportAddress; import org.elasticsearch.monitor.fs.FsInfo; import org.elasticsearch.test.ESIntegTestCase; import org.junit.Test; @@ -167,7 +168,7 @@ public class MockDiskUsagesIT extends ESIntegTestCase { usage.getTotalBytes(), usage.getFreeBytes(), usage.getFreeBytes()); paths[0] = path; FsInfo fsInfo = new FsInfo(System.currentTimeMillis(), paths); - return new NodeStats(new DiscoveryNode(nodeName, null, Version.V_2_0_0_beta1), + return new NodeStats(new DiscoveryNode(nodeName, DummyTransportAddress.INSTANCE, Version.CURRENT), System.currentTimeMillis(), null, null, null, null, null, fsInfo, diff --git a/core/src/test/java/org/elasticsearch/common/inject/ModuleTestCase.java b/core/src/test/java/org/elasticsearch/common/inject/ModuleTestCase.java index 323b9f5ca4a..2b1330abe63 100644 --- a/core/src/test/java/org/elasticsearch/common/inject/ModuleTestCase.java +++ b/core/src/test/java/org/elasticsearch/common/inject/ModuleTestCase.java @@ -24,12 +24,15 @@ import org.elasticsearch.common.inject.spi.Elements; import org.elasticsearch.common.inject.spi.InstanceBinding; import org.elasticsearch.common.inject.spi.LinkedKeyBinding; import org.elasticsearch.common.inject.spi.ProviderInstanceBinding; +import org.elasticsearch.common.inject.spi.ProviderLookup; import org.elasticsearch.test.ESTestCase; import java.lang.annotation.Annotation; import java.lang.reflect.Type; +import java.util.HashMap; import java.util.HashSet; import java.util.List; +import java.util.Map; import java.util.Set; /** @@ -77,7 +80,7 @@ public abstract class ModuleTestCase extends ESTestCase { /** * Configures the module and checks a Map of the "to" class - * is bound to "theClas". + * is bound to "theClass". */ public void assertMapMultiBinding(Module module, Class to, Class theClass) { List elements = Elements.getElements(module); @@ -138,10 +141,18 @@ public abstract class ModuleTestCase extends ESTestCase { assertTrue("Did not find provider for set of " + to.getName(), providerFound); } + /** + * Configures the module, and ensures an instance is bound to the "to" class, and the + * provided tester returns true on the instance. + */ public void assertInstanceBinding(Module module, Class to, Predicate tester) { assertInstanceBindingWithAnnotation(module, to, tester, null); } + /** + * Like {@link #assertInstanceBinding(Module, Class, Predicate)}, but filters the + * classes checked by the given annotation. + */ public void assertInstanceBindingWithAnnotation(Module module, Class to, Predicate tester, Class extends Annotation> annotation) { List elements = Elements.getElements(module); for (Element element : elements) { @@ -161,4 +172,39 @@ public abstract class ModuleTestCase extends ESTestCase { } fail("Did not find any instance binding to " + to.getName() + ". 
Found these bindings:\n" + s); } + + /** + * Configures the module, and ensures a map exists between the "keyType" and "valueType", + * and that all of the "expected" values are bound. + */ + @SuppressWarnings("unchecked") + public <K, V> void assertMapInstanceBinding(Module module, Class<K> keyType, Class<V> valueType, Map<K, V> expected) throws Exception { + // this method is insane because java type erasure makes it incredibly difficult... + Map<K, Key> keys = new HashMap<>(); + Map<Key, V> values = new HashMap<>(); + List<Element> elements = Elements.getElements(module); + for (Element element : elements) { + if (element instanceof InstanceBinding) { + InstanceBinding binding = (InstanceBinding) element; + if (binding.getKey().getRawType().equals(valueType)) { + values.put(binding.getKey(), (V)binding.getInstance()); + } else if (binding.getInstance() instanceof Map.Entry) { + Map.Entry entry = (Map.Entry)binding.getInstance(); + Object key = entry.getKey(); + Object providerValue = entry.getValue(); + if (key.getClass().equals(keyType) && providerValue instanceof ProviderLookup.ProviderImpl) { + ProviderLookup.ProviderImpl provider = (ProviderLookup.ProviderImpl)providerValue; + keys.put((K)key, provider.getKey()); + } + } + } + } + for (Map.Entry<K, V> entry : expected.entrySet()) { + Key valueKey = keys.get(entry.getKey()); + assertNotNull("Could not find binding for key [" + entry.getKey() + "], found these keys:\n" + keys.keySet(), valueKey); + V value = values.get(valueKey); + assertNotNull("Could not find value for instance key [" + valueKey + "], found these bindings:\n" + elements); + assertEquals(entry.getValue(), value); + } + } } diff --git a/core/src/test/java/org/elasticsearch/common/network/NetworkUtilsTests.java b/core/src/test/java/org/elasticsearch/common/network/NetworkUtilsTests.java new file mode 100644 index 00000000000..e5b95f258a3 --- /dev/null +++ b/core/src/test/java/org/elasticsearch/common/network/NetworkUtilsTests.java @@ -0,0 +1,77 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.common.network; + +import org.elasticsearch.test.ESTestCase; + +import java.net.InetAddress; + +/** + * Tests for network utils. Please avoid using any methods that cause DNS lookups!
+ */ +public class NetworkUtilsTests extends ESTestCase { + + /** + * test sort key order respects PREFER_IPV4 + */ + public void testSortKey() throws Exception { + InetAddress localhostv4 = InetAddress.getByName("127.0.0.1"); + InetAddress localhostv6 = InetAddress.getByName("::1"); + assertTrue(NetworkUtils.sortKey(localhostv4, false) < NetworkUtils.sortKey(localhostv6, false)); + assertTrue(NetworkUtils.sortKey(localhostv6, true) < NetworkUtils.sortKey(localhostv4, true)); + } + + /** + * test ordinary addresses sort before private addresses + */ + public void testSortKeySiteLocal() throws Exception { + InetAddress siteLocal = InetAddress.getByName("172.16.0.1"); + assert siteLocal.isSiteLocalAddress(); + InetAddress ordinary = InetAddress.getByName("192.192.192.192"); + assertTrue(NetworkUtils.sortKey(ordinary, true) < NetworkUtils.sortKey(siteLocal, true)); + assertTrue(NetworkUtils.sortKey(ordinary, false) < NetworkUtils.sortKey(siteLocal, false)); + + InetAddress siteLocal6 = InetAddress.getByName("fec0::1"); + assert siteLocal6.isSiteLocalAddress(); + InetAddress ordinary6 = InetAddress.getByName("fddd::1"); + assertTrue(NetworkUtils.sortKey(ordinary6, true) < NetworkUtils.sortKey(siteLocal6, true)); + assertTrue(NetworkUtils.sortKey(ordinary6, false) < NetworkUtils.sortKey(siteLocal6, false)); + } + + /** + * test private addresses sort before link local addresses + */ + public void testSortKeyLinkLocal() throws Exception { + InetAddress linkLocal = InetAddress.getByName("fe80::1"); + assert linkLocal.isLinkLocalAddress(); + InetAddress ordinary = InetAddress.getByName("fddd::1"); + assertTrue(NetworkUtils.sortKey(ordinary, true) < NetworkUtils.sortKey(linkLocal, true)); + assertTrue(NetworkUtils.sortKey(ordinary, false) < NetworkUtils.sortKey(linkLocal, false)); + } + + /** + * Test filtering out ipv4/ipv6 addresses + */ + public void testFilter() throws Exception { + InetAddress addresses[] = { InetAddress.getByName("::1"), InetAddress.getByName("127.0.0.1") }; + assertArrayEquals(new InetAddress[] { InetAddress.getByName("127.0.0.1") }, NetworkUtils.filterIPV4(addresses)); + assertArrayEquals(new InetAddress[] { InetAddress.getByName("::1") }, NetworkUtils.filterIPV6(addresses)); + } +} diff --git a/core/src/test/java/org/elasticsearch/common/util/concurrent/EsExecutorsTests.java b/core/src/test/java/org/elasticsearch/common/util/concurrent/EsExecutorsTests.java index c7406aa9511..b59c8dd1cb6 100644 --- a/core/src/test/java/org/elasticsearch/common/util/concurrent/EsExecutorsTests.java +++ b/core/src/test/java/org/elasticsearch/common/util/concurrent/EsExecutorsTests.java @@ -21,13 +21,14 @@ package org.elasticsearch.common.util.concurrent; import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.test.ESTestCase; -import org.junit.Test; +import org.hamcrest.Matcher; import java.util.concurrent.CountDownLatch; import java.util.concurrent.ThreadPoolExecutor; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicBoolean; +import static org.hamcrest.Matchers.anyOf; import static org.hamcrest.Matchers.containsString; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.lessThan; @@ -275,7 +276,19 @@ public class EsExecutorsTests extends ESTestCase { assertThat(message, containsString("on EsThreadPoolExecutor[testRejectionMessage")); assertThat(message, containsString("queue capacity = " + queue)); assertThat(message, containsString("[Running")); - assertThat(message, containsString("active threads = " + pool)); + /* + * 
While you'd expect all threads in the pool to be active when the queue gets long enough to cause rejections this isn't + * always the case. Sometimes you'll see "active threads = <pool - 1>", presumably because one of those threads has finished + * its current task but has yet to pick up another task. You too can reproduce this by adding the @Repeat annotation to this + * test with something like 10000 iterations. I suspect you could see "active threads = <any natural number <= to pool>". So + * that is what we assert. + */ + @SuppressWarnings("unchecked") + Matcher<String>[] activeThreads = new Matcher[pool + 1]; + for (int p = 0; p <= pool; p++) { + activeThreads[p] = containsString("active threads = " + p); + } + assertThat(message, anyOf(activeThreads)); assertThat(message, containsString("queued tasks = " + queue)); assertThat(message, containsString("completed tasks = 0")); } diff --git a/core/src/test/java/org/elasticsearch/get/GetActionIT.java b/core/src/test/java/org/elasticsearch/get/GetActionIT.java index b376590f68b..743304df941 100644 --- a/core/src/test/java/org/elasticsearch/get/GetActionIT.java +++ b/core/src/test/java/org/elasticsearch/get/GetActionIT.java @@ -1116,7 +1116,7 @@ public class GetActionIT extends ESIntegTestCase { @Test public void testGeneratedNumberFieldsUnstored() throws IOException { indexSingleDocumentWithNumericFieldsGeneratedFromText(false, randomBoolean()); - String[] fieldsList = {"token_count", "text.token_count", "murmur", "text.murmur"}; + String[] fieldsList = {"token_count", "text.token_count"}; // before refresh - document is only in translog assertGetFieldsAlwaysNull(indexOrAlias(), "doc", "1", fieldsList); refresh(); @@ -1130,7 +1130,7 @@ public class GetActionIT extends ESIntegTestCase { @Test public void testGeneratedNumberFieldsStored() throws IOException { indexSingleDocumentWithNumericFieldsGeneratedFromText(true, randomBoolean()); - String[] fieldsList = {"token_count", "text.token_count", "murmur", "text.murmur"}; + String[] fieldsList = {"token_count", "text.token_count"}; // before refresh - document is only in translog assertGetFieldsNull(indexOrAlias(), "doc", "1", fieldsList); assertGetFieldsException(indexOrAlias(), "doc", "1", fieldsList); @@ -1159,10 +1159,6 @@ public class GetActionIT extends ESIntegTestCase { " \"analyzer\": \"standard\",\n" + " \"store\": \"" + storedString + "\"" + " },\n" + - " \"murmur\": {\n" + - " \"type\": \"murmur3\",\n" + - " \"store\": \"" + storedString + "\"" + - " },\n" + " \"text\": {\n" + " \"type\": \"string\",\n" + " \"fields\": {\n" + @@ -1170,10 +1166,6 @@ " \"type\": \"token_count\",\n" + " \"analyzer\": \"standard\",\n" + " \"store\": \"" + storedString + "\"" + - " },\n" + - " \"murmur\": {\n" + - " \"type\": \"murmur3\",\n" + - " \"store\": \"" + storedString + "\"" + " }\n" + " }\n" + " }" + @@ -1185,7 +1177,6 @@ assertAcked(prepareCreate("test").addAlias(new Alias("alias")).setSource(createIndexSource)); ensureGreen(); String doc = "{\n" + - " \"murmur\": \"Some value that can be hashed\",\n" + " \"token_count\": \"A text with five words.\",\n" + " \"text\": \"A text with five words.\"\n" + "}\n"; diff --git a/core/src/test/java/org/elasticsearch/index/analysis/AnalysisModuleTests.java b/core/src/test/java/org/elasticsearch/index/analysis/AnalysisModuleTests.java index 8a81705684d..4a09eeed6b6 100644 --- a/core/src/test/java/org/elasticsearch/index/analysis/AnalysisModuleTests.java +++
b/core/src/test/java/org/elasticsearch/index/analysis/AnalysisModuleTests.java @@ -40,7 +40,6 @@ import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexNameModule; import org.elasticsearch.index.analysis.filter1.MyFilterTokenFilterFactory; import org.elasticsearch.index.settings.IndexSettingsModule; -import org.elasticsearch.indices.analysis.IndicesAnalysisModule; import org.elasticsearch.indices.analysis.IndicesAnalysisService; import org.elasticsearch.test.ESTestCase; import org.hamcrest.MatcherAssert; @@ -55,7 +54,9 @@ import java.nio.file.Path; import java.util.Set; import static org.elasticsearch.common.settings.Settings.settingsBuilder; -import static org.hamcrest.Matchers.*; +import static org.hamcrest.Matchers.equalTo; +import static org.hamcrest.Matchers.instanceOf; +import static org.hamcrest.Matchers.is; /** * @@ -66,7 +67,7 @@ public class AnalysisModuleTests extends ESTestCase { public AnalysisService getAnalysisService(Settings settings) { Index index = new Index("test"); - Injector parentInjector = new ModulesBuilder().add(new SettingsModule(settings), new EnvironmentModule(new Environment(settings)), new IndicesAnalysisModule()).createInjector(); + Injector parentInjector = new ModulesBuilder().add(new SettingsModule(settings), new EnvironmentModule(new Environment(settings))).createInjector(); AnalysisModule analysisModule = new AnalysisModule(settings, parentInjector.getInstance(IndicesAnalysisService.class)); analysisModule.addTokenFilter("myfilter", MyFilterTokenFilterFactory.class); injector = new ModulesBuilder().add( diff --git a/core/src/test/java/org/elasticsearch/index/analysis/AnalysisTestsHelper.java b/core/src/test/java/org/elasticsearch/index/analysis/AnalysisTestsHelper.java index 74ff95d4a14..72eac6860ba 100644 --- a/core/src/test/java/org/elasticsearch/index/analysis/AnalysisTestsHelper.java +++ b/core/src/test/java/org/elasticsearch/index/analysis/AnalysisTestsHelper.java @@ -30,7 +30,7 @@ import org.elasticsearch.env.EnvironmentModule; import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexNameModule; import org.elasticsearch.index.settings.IndexSettingsModule; -import org.elasticsearch.indices.analysis.IndicesAnalysisModule; +import org.elasticsearch.indices.IndicesModule; import org.elasticsearch.indices.analysis.IndicesAnalysisService; import java.nio.file.Path; @@ -52,8 +52,15 @@ public class AnalysisTestsHelper { if (settings.get(IndexMetaData.SETTING_VERSION_CREATED) == null) { settings = Settings.builder().put(settings).put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT).build(); } + IndicesModule indicesModule = new IndicesModule(settings) { + @Override + public void configure() { + // skip services + bindHunspellExtension(); + } + }; Injector parentInjector = new ModulesBuilder().add(new SettingsModule(settings), - new EnvironmentModule(new Environment(settings)), new IndicesAnalysisModule()).createInjector(); + new EnvironmentModule(new Environment(settings)), indicesModule).createInjector(); AnalysisModule analysisModule = new AnalysisModule(settings, parentInjector.getInstance(IndicesAnalysisService.class)); diff --git a/core/src/test/java/org/elasticsearch/index/analysis/CharFilterTests.java b/core/src/test/java/org/elasticsearch/index/analysis/CharFilterTests.java index ef2c6d4e7fb..0171b4cc695 100644 --- a/core/src/test/java/org/elasticsearch/index/analysis/CharFilterTests.java +++ b/core/src/test/java/org/elasticsearch/index/analysis/CharFilterTests.java @@ -29,7 +29,6 @@ import 
org.elasticsearch.env.EnvironmentModule; import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexNameModule; import org.elasticsearch.index.settings.IndexSettingsModule; -import org.elasticsearch.indices.analysis.IndicesAnalysisModule; import org.elasticsearch.indices.analysis.IndicesAnalysisService; import org.elasticsearch.test.ESTokenStreamTestCase; import org.junit.Test; @@ -51,7 +50,7 @@ public class CharFilterTests extends ESTokenStreamTestCase { .putArray("index.analysis.analyzer.custom_with_char_filter.char_filter", "my_mapping") .put("path.home", createTempDir().toString()) .build(); - Injector parentInjector = new ModulesBuilder().add(new SettingsModule(settings), new EnvironmentModule(new Environment(settings)), new IndicesAnalysisModule()).createInjector(); + Injector parentInjector = new ModulesBuilder().add(new SettingsModule(settings), new EnvironmentModule(new Environment(settings))).createInjector(); Injector injector = new ModulesBuilder().add( new IndexSettingsModule(index, settings), new IndexNameModule(index), @@ -77,7 +76,7 @@ public class CharFilterTests extends ESTokenStreamTestCase { .putArray("index.analysis.analyzer.custom_with_char_filter.char_filter", "html_strip") .put("path.home", createTempDir().toString()) .build(); - Injector parentInjector = new ModulesBuilder().add(new SettingsModule(settings), new EnvironmentModule(new Environment(settings)), new IndicesAnalysisModule()).createInjector(); + Injector parentInjector = new ModulesBuilder().add(new SettingsModule(settings), new EnvironmentModule(new Environment(settings))).createInjector(); Injector injector = new ModulesBuilder().add( new IndexSettingsModule(index, settings), new IndexNameModule(index), diff --git a/core/src/test/java/org/elasticsearch/index/analysis/CompoundAnalysisTests.java b/core/src/test/java/org/elasticsearch/index/analysis/CompoundAnalysisTests.java index ad81450c336..28b30e9ff5d 100644 --- a/core/src/test/java/org/elasticsearch/index/analysis/CompoundAnalysisTests.java +++ b/core/src/test/java/org/elasticsearch/index/analysis/CompoundAnalysisTests.java @@ -37,7 +37,6 @@ import org.elasticsearch.index.IndexNameModule; import org.elasticsearch.index.analysis.compound.DictionaryCompoundWordTokenFilterFactory; import org.elasticsearch.index.analysis.filter1.MyFilterTokenFilterFactory; import org.elasticsearch.index.settings.IndexSettingsModule; -import org.elasticsearch.indices.analysis.IndicesAnalysisModule; import org.elasticsearch.indices.analysis.IndicesAnalysisService; import org.elasticsearch.test.ESTestCase; import org.hamcrest.MatcherAssert; @@ -48,7 +47,9 @@ import java.util.ArrayList; import java.util.List; import static org.elasticsearch.common.settings.Settings.settingsBuilder; -import static org.hamcrest.Matchers.*; +import static org.hamcrest.Matchers.equalTo; +import static org.hamcrest.Matchers.hasItems; +import static org.hamcrest.Matchers.instanceOf; /** */ @@ -58,7 +59,7 @@ public class CompoundAnalysisTests extends ESTestCase { public void testDefaultsCompoundAnalysis() throws Exception { Index index = new Index("test"); Settings settings = getJsonSettings(); - Injector parentInjector = new ModulesBuilder().add(new SettingsModule(settings), new EnvironmentModule(new Environment(settings)), new IndicesAnalysisModule()).createInjector(); + Injector parentInjector = new ModulesBuilder().add(new SettingsModule(settings), new EnvironmentModule(new Environment(settings))).createInjector(); AnalysisModule analysisModule = new AnalysisModule(settings, 
parentInjector.getInstance(IndicesAnalysisService.class)); analysisModule.addTokenFilter("myfilter", MyFilterTokenFilterFactory.class); Injector injector = new ModulesBuilder().add( @@ -85,7 +86,7 @@ public class CompoundAnalysisTests extends ESTestCase { private List<String> analyze(Settings settings, String analyzerName, String text) throws IOException { Index index = new Index("test"); - Injector parentInjector = new ModulesBuilder().add(new SettingsModule(settings), new EnvironmentModule(new Environment(settings)), new IndicesAnalysisModule()).createInjector(); + Injector parentInjector = new ModulesBuilder().add(new SettingsModule(settings), new EnvironmentModule(new Environment(settings))).createInjector(); AnalysisModule analysisModule = new AnalysisModule(settings, parentInjector.getInstance(IndicesAnalysisService.class)); analysisModule.addTokenFilter("myfilter", MyFilterTokenFilterFactory.class); Injector injector = new ModulesBuilder().add( diff --git a/core/src/test/java/org/elasticsearch/index/analysis/PatternCaptureTokenFilterTests.java b/core/src/test/java/org/elasticsearch/index/analysis/PatternCaptureTokenFilterTests.java index 2796367f07f..6a7275b73e1 100644 --- a/core/src/test/java/org/elasticsearch/index/analysis/PatternCaptureTokenFilterTests.java +++ b/core/src/test/java/org/elasticsearch/index/analysis/PatternCaptureTokenFilterTests.java @@ -30,7 +30,6 @@ import org.elasticsearch.env.EnvironmentModule; import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexNameModule; import org.elasticsearch.index.settings.IndexSettingsModule; -import org.elasticsearch.indices.analysis.IndicesAnalysisModule; import org.elasticsearch.indices.analysis.IndicesAnalysisService; import org.elasticsearch.test.ESTokenStreamTestCase; import org.junit.Test; @@ -48,7 +47,7 @@ public class PatternCaptureTokenFilterTests extends ESTokenStreamTestCase { .loadFromStream(json, getClass().getResourceAsStream(json)) .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT) .build(); - Injector parentInjector = new ModulesBuilder().add(new SettingsModule(settings), new EnvironmentModule(new Environment(settings)), new IndicesAnalysisModule()).createInjector(); + Injector parentInjector = new ModulesBuilder().add(new SettingsModule(settings), new EnvironmentModule(new Environment(settings))).createInjector(); Injector injector = new ModulesBuilder().add( new IndexSettingsModule(index, settings), new IndexNameModule(index), diff --git a/core/src/test/java/org/elasticsearch/index/analysis/StopAnalyzerTests.java b/core/src/test/java/org/elasticsearch/index/analysis/StopAnalyzerTests.java index 5ec0178cea0..9265587929c 100644 --- a/core/src/test/java/org/elasticsearch/index/analysis/StopAnalyzerTests.java +++ b/core/src/test/java/org/elasticsearch/index/analysis/StopAnalyzerTests.java @@ -30,7 +30,6 @@ import org.elasticsearch.env.EnvironmentModule; import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexNameModule; import org.elasticsearch.index.settings.IndexSettingsModule; -import org.elasticsearch.indices.analysis.IndicesAnalysisModule; import org.elasticsearch.indices.analysis.IndicesAnalysisService; import org.elasticsearch.test.ESTokenStreamTestCase; import org.junit.Test; @@ -48,7 +47,7 @@ public class StopAnalyzerTests extends ESTokenStreamTestCase { .put("path.home", createTempDir().toString()) .put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT) .build(); - Injector parentInjector = new ModulesBuilder().add(new SettingsModule(settings), new
EnvironmentModule(new Environment(settings)), new IndicesAnalysisModule()).createInjector(); + Injector parentInjector = new ModulesBuilder().add(new SettingsModule(settings), new EnvironmentModule(new Environment(settings))).createInjector(); Injector injector = new ModulesBuilder().add( new IndexSettingsModule(index, settings), new IndexNameModule(index), diff --git a/core/src/test/java/org/elasticsearch/index/analysis/synonyms/SynonymsAnalysisTest.java b/core/src/test/java/org/elasticsearch/index/analysis/synonyms/SynonymsAnalysisTest.java index e38ab9a0024..371a092ff6e 100644 --- a/core/src/test/java/org/elasticsearch/index/analysis/synonyms/SynonymsAnalysisTest.java +++ b/core/src/test/java/org/elasticsearch/index/analysis/synonyms/SynonymsAnalysisTest.java @@ -39,7 +39,6 @@ import org.elasticsearch.index.IndexNameModule; import org.elasticsearch.index.analysis.AnalysisModule; import org.elasticsearch.index.analysis.AnalysisService; import org.elasticsearch.index.settings.IndexSettingsModule; -import org.elasticsearch.indices.analysis.IndicesAnalysisModule; import org.elasticsearch.indices.analysis.IndicesAnalysisService; import org.elasticsearch.test.ESTestCase; import org.hamcrest.MatcherAssert; @@ -80,8 +79,7 @@ public class SynonymsAnalysisTest extends ESTestCase { Injector parentInjector = new ModulesBuilder().add( new SettingsModule(settings), - new EnvironmentModule(new Environment(settings)), - new IndicesAnalysisModule()) + new EnvironmentModule(new Environment(settings))) .createInjector(); Injector injector = new ModulesBuilder().add( new IndexSettingsModule(index, settings), diff --git a/core/src/test/java/org/elasticsearch/index/cache/IndexCacheModuleTests.java b/core/src/test/java/org/elasticsearch/index/cache/IndexCacheModuleTests.java new file mode 100644 index 00000000000..bd564744f20 --- /dev/null +++ b/core/src/test/java/org/elasticsearch/index/cache/IndexCacheModuleTests.java @@ -0,0 +1,89 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.index.cache; + +import org.apache.lucene.search.QueryCachingPolicy; +import org.apache.lucene.search.Weight; +import org.elasticsearch.common.inject.ModuleTestCase; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.index.Index; +import org.elasticsearch.index.cache.query.QueryCache; +import org.elasticsearch.index.cache.query.index.IndexQueryCache; +import org.elasticsearch.index.cache.query.none.NoneQueryCache; + +import java.io.IOException; + +public class IndexCacheModuleTests extends ModuleTestCase { + + public void testCannotRegisterProvidedImplementations() { + IndexCacheModule module = new IndexCacheModule(Settings.EMPTY); + try { + module.registerQueryCache("index", IndexQueryCache.class); + } catch (IllegalArgumentException e) { + assertEquals(e.getMessage(), "Can't register the same [query_cache] more than once for [index]"); + } + + try { + module.registerQueryCache("none", NoneQueryCache.class); + } catch (IllegalArgumentException e) { + assertEquals(e.getMessage(), "Can't register the same [query_cache] more than once for [none]"); + } + } + + public void testRegisterCustomQueryCache() { + IndexCacheModule module = new IndexCacheModule( + Settings.builder().put(IndexCacheModule.QUERY_CACHE_TYPE, "custom").build() + ); + module.registerQueryCache("custom", CustomQueryCache.class); + try { + module.registerQueryCache("custom", CustomQueryCache.class); + } catch (IllegalArgumentException e) { + assertEquals(e.getMessage(), "Can't register the same [query_cache] more than once for [custom]"); + } + assertBinding(module, QueryCache.class, CustomQueryCache.class); + } + + public void testDefaultQueryCacheImplIsSelected() { + IndexCacheModule module = new IndexCacheModule(Settings.EMPTY); + assertBinding(module, QueryCache.class, IndexQueryCache.class); + } + + class CustomQueryCache implements QueryCache { + + @Override + public void clear(String reason) { + } + + @Override + public void close() throws IOException { + } + + @Override + public Index index() { + return new Index("test"); + } + + @Override + public Weight doCache(Weight weight, QueryCachingPolicy policy) { + return weight; + } + } + +} diff --git a/core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java b/core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java index 974a4a21557..8edc47410a3 100644 --- a/core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java +++ b/core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java @@ -535,7 +535,7 @@ public class InternalEngineTests extends ESTestCase { } @Override - public IndexSearcher wrap(IndexSearcher searcher) throws EngineException { + public IndexSearcher wrap(EngineConfig engineConfig, IndexSearcher searcher) throws EngineException { counter.incrementAndGet(); return searcher; } diff --git a/core/src/test/java/org/elasticsearch/index/mapper/geo/GeoPointFieldMapperTests.java b/core/src/test/java/org/elasticsearch/index/mapper/geo/GeoPointFieldMapperTests.java index 2584e861643..1ca1c3a2837 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/geo/GeoPointFieldMapperTests.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/geo/GeoPointFieldMapperTests.java @@ -18,7 +18,10 @@ */ package org.elasticsearch.index.mapper.geo; +import org.elasticsearch.Version; +import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.common.geo.GeoHashUtils; +import org.elasticsearch.common.settings.Settings; import 
org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.index.mapper.DocumentMapper; import org.elasticsearch.index.mapper.DocumentMapperParser; @@ -26,6 +29,7 @@ import org.elasticsearch.index.mapper.MapperParsingException; import org.elasticsearch.index.mapper.MergeResult; import org.elasticsearch.index.mapper.ParsedDocument; import org.elasticsearch.test.ESSingleNodeTestCase; +import org.elasticsearch.test.VersionUtils; import org.junit.Test; import java.util.ArrayList; @@ -138,7 +142,8 @@ public class GeoPointFieldMapperTests extends ESSingleNodeTestCase { public void testNormalizeLatLonValuesDefault() throws Exception { // default to normalize String mapping = XContentFactory.jsonBuilder().startObject().startObject("type") - .startObject("properties").startObject("point").field("type", "geo_point").endObject().endObject() + .startObject("properties").startObject("point").field("type", "geo_point").field("coerce", true) + .field("ignore_malformed", true).endObject().endObject() .endObject().endObject().string(); DocumentMapper defaultMapper = createIndex("test").mapperService().documentMapperParser().parse(mapping); @@ -171,7 +176,8 @@ public class GeoPointFieldMapperTests extends ESSingleNodeTestCase { @Test public void testValidateLatLonValues() throws Exception { String mapping = XContentFactory.jsonBuilder().startObject().startObject("type") - .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("normalize", false).field("validate", true).endObject().endObject() + .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("coerce", false) + .field("ignore_malformed", false).endObject().endObject() .endObject().endObject().string(); DocumentMapper defaultMapper = createIndex("test").mapperService().documentMapperParser().parse(mapping); @@ -231,7 +237,8 @@ public class GeoPointFieldMapperTests extends ESSingleNodeTestCase { @Test public void testNoValidateLatLonValues() throws Exception { String mapping = XContentFactory.jsonBuilder().startObject().startObject("type") - .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("normalize", false).field("validate", false).endObject().endObject() + .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("coerce", false) + .field("ignore_malformed", true).endObject().endObject() .endObject().endObject().string(); DocumentMapper defaultMapper = createIndex("test").mapperService().documentMapperParser().parse(mapping); @@ -472,30 +479,161 @@ public class GeoPointFieldMapperTests extends ESSingleNodeTestCase { assertThat(doc.rootDoc().getFields("point")[1].stringValue(), equalTo("1.4,1.5")); } + + /** + * Test that expected exceptions are thrown when creating a new index with deprecated options + */ + @Test + public void testOptionDeprecation() throws Exception { + DocumentMapperParser parser = createIndex("test").mapperService().documentMapperParser(); + // test deprecation exceptions on newly created indexes + try { + String validateMapping = XContentFactory.jsonBuilder().startObject().startObject("type") + .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("geohash", true) + .field("validate", true).endObject().endObject() + .endObject().endObject().string(); + parser.parse(validateMapping); + fail("process completed successfully when " + MapperParsingException.class.getName() 
+ " expected"); + } catch (MapperParsingException e) { + assertEquals(e.getMessage(), "Mapping definition for [point] has unsupported parameters: [validate : true]"); + } + + try { + String validateMapping = XContentFactory.jsonBuilder().startObject().startObject("type") + .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("geohash", true) + .field("validate_lat", true).endObject().endObject() + .endObject().endObject().string(); + parser.parse(validateMapping); + fail("process completed successfully when " + MapperParsingException.class.getName() + " expected"); + } catch (MapperParsingException e) { + assertEquals(e.getMessage(), "Mapping definition for [point] has unsupported parameters: [validate_lat : true]"); + } + + try { + String validateMapping = XContentFactory.jsonBuilder().startObject().startObject("type") + .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("geohash", true) + .field("validate_lon", true).endObject().endObject() + .endObject().endObject().string(); + parser.parse(validateMapping); + fail("process completed successfully when " + MapperParsingException.class.getName() + " expected"); + } catch (MapperParsingException e) { + assertEquals(e.getMessage(), "Mapping definition for [point] has unsupported parameters: [validate_lon : true]"); + } + + // test deprecated normalize + try { + String normalizeMapping = XContentFactory.jsonBuilder().startObject().startObject("type") + .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("geohash", true) + .field("normalize", true).endObject().endObject() + .endObject().endObject().string(); + parser.parse(normalizeMapping); + fail("process completed successfully when " + MapperParsingException.class.getName() + " expected"); + } catch (MapperParsingException e) { + assertEquals(e.getMessage(), "Mapping definition for [point] has unsupported parameters: [normalize : true]"); + } + + try { + String normalizeMapping = XContentFactory.jsonBuilder().startObject().startObject("type") + .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("geohash", true) + .field("normalize_lat", true).endObject().endObject() + .endObject().endObject().string(); + parser.parse(normalizeMapping); + fail("process completed successfully when " + MapperParsingException.class.getName() + " expected"); + } catch (MapperParsingException e) { + assertEquals(e.getMessage(), "Mapping definition for [point] has unsupported parameters: [normalize_lat : true]"); + } + + try { + String normalizeMapping = XContentFactory.jsonBuilder().startObject().startObject("type") + .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("geohash", true) + .field("normalize_lon", true).endObject().endObject() + .endObject().endObject().string(); + parser.parse(normalizeMapping); + fail("process completed successfully when " + MapperParsingException.class.getName() + " expected"); + } catch (MapperParsingException e) { + assertEquals(e.getMessage(), "Mapping definition for [point] has unsupported parameters: [normalize_lon : true]"); + } + } + + /** + * Test backward compatibility + */ + @Test + public void testBackwardCompatibleOptions() throws Exception { + // backward compatibility testing + Settings settings = Settings.settingsBuilder().put(IndexMetaData.SETTING_VERSION_CREATED, VersionUtils.randomVersionBetween(random(), 
Version.V_1_0_0, + Version.V_1_7_1)).build(); + + // validate + DocumentMapperParser parser = createIndex("test", settings).mapperService().documentMapperParser(); + String mapping = XContentFactory.jsonBuilder().startObject().startObject("type") + .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("geohash", true) + .field("validate", false).endObject().endObject() + .endObject().endObject().string(); + parser.parse(mapping); + assertThat(parser.parse(mapping).mapping().toString(), containsString("\"ignore_malformed\":true")); + + mapping = XContentFactory.jsonBuilder().startObject().startObject("type") + .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("geohash", true) + .field("validate_lat", false).endObject().endObject() + .endObject().endObject().string(); + parser.parse(mapping); + assertThat(parser.parse(mapping).mapping().toString(), containsString("\"ignore_malformed\":true")); + + mapping = XContentFactory.jsonBuilder().startObject().startObject("type") + .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("geohash", true) + .field("validate_lon", false).endObject().endObject() + .endObject().endObject().string(); + parser.parse(mapping); + assertThat(parser.parse(mapping).mapping().toString(), containsString("\"ignore_malformed\":true")); + + // normalize + mapping = XContentFactory.jsonBuilder().startObject().startObject("type") + .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("geohash", true) + .field("normalize", true).endObject().endObject() + .endObject().endObject().string(); + parser.parse(mapping); + assertThat(parser.parse(mapping).mapping().toString(), containsString("\"coerce\":true")); + + mapping = XContentFactory.jsonBuilder().startObject().startObject("type") + .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("geohash", true) + .field("normalize_lat", true).endObject().endObject() + .endObject().endObject().string(); + parser.parse(mapping); + assertThat(parser.parse(mapping).mapping().toString(), containsString("\"coerce\":true")); + + mapping = XContentFactory.jsonBuilder().startObject().startObject("type") + .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("geohash", true) + .field("normalize_lon", true).endObject().endObject() + .endObject().endObject().string(); + parser.parse(mapping); + assertThat(parser.parse(mapping).mapping().toString(), containsString("\"coerce\":true")); + } + @Test public void testGeoPointMapperMerge() throws Exception { String stage1Mapping = XContentFactory.jsonBuilder().startObject().startObject("type") .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("geohash", true) - .field("validate", true).endObject().endObject() + .field("ignore_malformed", true).endObject().endObject() .endObject().endObject().string(); DocumentMapperParser parser = createIndex("test").mapperService().documentMapperParser(); DocumentMapper stage1 = parser.parse(stage1Mapping); String stage2Mapping = XContentFactory.jsonBuilder().startObject().startObject("type") - .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("geohash", true) - .field("validate", false).endObject().endObject() + 
.startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", false).field("geohash", true) + .field("ignore_malformed", false).endObject().endObject() .endObject().endObject().string(); DocumentMapper stage2 = parser.parse(stage2Mapping); MergeResult mergeResult = stage1.merge(stage2.mapping(), false, false); assertThat(mergeResult.hasConflicts(), equalTo(true)); - assertThat(mergeResult.buildConflicts().length, equalTo(2)); + assertThat(mergeResult.buildConflicts().length, equalTo(1)); // todo better way of checking conflict? - assertThat("mapper [point] has different validate_lat", isIn(new ArrayList<>(Arrays.asList(mergeResult.buildConflicts())))); + assertThat("mapper [point] has different lat_lon", isIn(new ArrayList<>(Arrays.asList(mergeResult.buildConflicts())))); // correct mapping and ensure no failures stage2Mapping = XContentFactory.jsonBuilder().startObject().startObject("type") .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("geohash", true) - .field("validate", true).field("normalize", true).endObject().endObject() + .field("ignore_malformed", true).endObject().endObject() .endObject().endObject().string(); stage2 = parser.parse(stage2Mapping); mergeResult = stage1.merge(stage2.mapping(), false, false); diff --git a/core/src/test/java/org/elasticsearch/index/mapper/geo/GeoPointFieldTypeTests.java b/core/src/test/java/org/elasticsearch/index/mapper/geo/GeoPointFieldTypeTests.java index 8254e0b8bec..07a769faa61 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/geo/GeoPointFieldTypeTests.java +++ b/core/src/test/java/org/elasticsearch/index/mapper/geo/GeoPointFieldTypeTests.java @@ -31,7 +31,7 @@ public class GeoPointFieldTypeTests extends FieldTypeTestCase { @Override protected int numProperties() { - return 6 + super.numProperties(); + return 4 + super.numProperties(); } @Override @@ -40,11 +40,9 @@ public class GeoPointFieldTypeTests extends FieldTypeTestCase { switch (propNum) { case 0: gft.setGeohashEnabled(new StringFieldMapper.StringFieldType(), 1, true); break; case 1: gft.setLatLonEnabled(new DoubleFieldMapper.DoubleFieldType(), new DoubleFieldMapper.DoubleFieldType()); break; - case 2: gft.setValidateLon(!gft.validateLon()); break; - case 3: gft.setValidateLat(!gft.validateLat()); break; - case 4: gft.setNormalizeLon(!gft.normalizeLon()); break; - case 5: gft.setNormalizeLat(!gft.normalizeLat()); break; - default: super.modifyProperty(ft, propNum - 6); + case 2: gft.setIgnoreMalformed(!gft.ignoreMalformed()); break; + case 3: gft.setCoerce(!gft.coerce()); break; + default: super.modifyProperty(ft, propNum - 4); } } } diff --git a/core/src/test/java/org/elasticsearch/index/query/TemplateQueryParserTest.java b/core/src/test/java/org/elasticsearch/index/query/TemplateQueryParserTest.java index aab3c6fe1c4..34c890a5189 100644 --- a/core/src/test/java/org/elasticsearch/index/query/TemplateQueryParserTest.java +++ b/core/src/test/java/org/elasticsearch/index/query/TemplateQueryParserTest.java @@ -21,6 +21,7 @@ package org.elasticsearch.index.query; import org.apache.lucene.search.MatchAllDocsQuery; import org.apache.lucene.search.Query; import org.elasticsearch.Version; +import org.elasticsearch.action.ActionModule; import org.elasticsearch.cluster.ClusterService; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.common.inject.AbstractModule; @@ -39,15 +40,13 @@ import org.elasticsearch.index.IndexNameModule; import 
org.elasticsearch.index.analysis.AnalysisModule; import org.elasticsearch.index.cache.IndexCacheModule; import org.elasticsearch.index.query.functionscore.ScoreFunctionParser; -import org.elasticsearch.index.query.functionscore.ScoreFunctionParserMapper; import org.elasticsearch.index.settings.IndexSettingsModule; import org.elasticsearch.index.similarity.SimilarityModule; +import org.elasticsearch.indices.IndicesModule; import org.elasticsearch.indices.analysis.IndicesAnalysisService; import org.elasticsearch.indices.breaker.CircuitBreakerService; import org.elasticsearch.indices.breaker.NoneCircuitBreakerService; -import org.elasticsearch.indices.query.IndicesQueriesModule; import org.elasticsearch.script.ScriptModule; -import org.elasticsearch.search.SearchModule; import org.elasticsearch.test.ESTestCase; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.threadpool.ThreadPoolModule; @@ -80,7 +79,13 @@ public class TemplateQueryParserTest extends ESTestCase { new EnvironmentModule(new Environment(settings)), new SettingsModule(settings), new ThreadPoolModule(new ThreadPool(settings)), - new IndicesQueriesModule(), + new IndicesModule(settings) { + @Override + public void configure() { + // skip services + bindQueryParsersExtension(); + } + }, new ScriptModule(settings), new IndexSettingsModule(index, settings), new IndexCacheModule(settings), diff --git a/core/src/test/java/org/elasticsearch/index/query/plugin/DummyQueryParserPlugin.java b/core/src/test/java/org/elasticsearch/index/query/plugin/DummyQueryParserPlugin.java index 2e0356e4619..cdffcc8c36b 100644 --- a/core/src/test/java/org/elasticsearch/index/query/plugin/DummyQueryParserPlugin.java +++ b/core/src/test/java/org/elasticsearch/index/query/plugin/DummyQueryParserPlugin.java @@ -23,15 +23,13 @@ import org.apache.lucene.search.IndexSearcher; import org.apache.lucene.search.MatchAllDocsQuery; import org.apache.lucene.search.Query; import org.apache.lucene.search.Weight; -import org.elasticsearch.common.inject.Module; -import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.query.QueryBuilder; import org.elasticsearch.index.query.QueryParseContext; import org.elasticsearch.index.query.QueryParser; import org.elasticsearch.index.query.QueryParsingException; -import org.elasticsearch.indices.query.IndicesQueriesModule; +import org.elasticsearch.indices.IndicesModule; import org.elasticsearch.plugins.AbstractPlugin; import java.io.IOException; @@ -48,16 +46,8 @@ public class DummyQueryParserPlugin extends AbstractPlugin { return "dummy query"; } - @Override - public void processModule(Module module) { - if (module instanceof IndicesQueriesModule) { - IndicesQueriesModule indicesQueriesModule = (IndicesQueriesModule) module; - indicesQueriesModule.addQuery(DummyQueryParser.class); - } - } - - public Settings settings() { - return Settings.EMPTY; + public void onModule(IndicesModule module) { + module.registerQueryParser(DummyQueryParser.class); } public static class DummyQueryBuilder extends QueryBuilder { diff --git a/core/src/test/java/org/elasticsearch/index/search/geo/GeoUtilsTests.java b/core/src/test/java/org/elasticsearch/index/search/geo/GeoUtilsTests.java index cd9783448eb..326b144dc0c 100644 --- a/core/src/test/java/org/elasticsearch/index/search/geo/GeoUtilsTests.java +++ b/core/src/test/java/org/elasticsearch/index/search/geo/GeoUtilsTests.java @@ -339,34 
+339,28 @@ public class GeoUtilsTests extends ESTestCase { @Test public void testNormalizePoint_outsideNormalRange_withOptions() { for (int i = 0; i < 100; i++) { - boolean normLat = randomBoolean(); - boolean normLon = randomBoolean(); + boolean normalize = randomBoolean(); double normalisedLat = (randomDouble() * 180.0) - 90.0; double normalisedLon = (randomDouble() * 360.0) - 180.0; - int shiftLat = randomIntBetween(1, 10000); - int shiftLon = randomIntBetween(1, 10000); - double testLat = normalisedLat + (180.0 * shiftLat); - double testLon = normalisedLon + (360.0 * shiftLon); + int shift = randomIntBetween(1, 10000); + double testLat = normalisedLat + (180.0 * shift); + double testLon = normalisedLon + (360.0 * shift); double expectedLat; double expectedLon; - if (normLat) { - expectedLat = normalisedLat * (shiftLat % 2 == 0 ? 1 : -1); - } else { - expectedLat = testLat; - } - if (normLon) { - expectedLon = normalisedLon + ((normLat && shiftLat % 2 == 1) ? 180 : 0); + if (normalize) { + expectedLat = normalisedLat * (shift % 2 == 0 ? 1 : -1); + expectedLon = normalisedLon + ((shift % 2 == 1) ? 180 : 0); if (expectedLon > 180.0) { expectedLon -= 360; } } else { - double shiftValue = normalisedLon > 0 ? -180 : 180; - expectedLon = testLon + ((normLat && shiftLat % 2 == 1) ? shiftValue : 0); + expectedLat = testLat; + expectedLon = testLon; } GeoPoint testPoint = new GeoPoint(testLat, testLon); GeoPoint expectedPoint = new GeoPoint(expectedLat, expectedLon); - GeoUtils.normalizePoint(testPoint, normLat, normLon); + GeoUtils.normalizePoint(testPoint, normalize, normalize); assertThat("Unexpected Latitude", testPoint.lat(), closeTo(expectedPoint.lat(), MAX_ACCEPTABLE_ERROR)); assertThat("Unexpected Longitude", testPoint.lon(), closeTo(expectedPoint.lon(), MAX_ACCEPTABLE_ERROR)); } diff --git a/core/src/test/java/org/elasticsearch/indices/IndicesModuleTests.java b/core/src/test/java/org/elasticsearch/indices/IndicesModuleTests.java new file mode 100644 index 00000000000..7727d8c1b93 --- /dev/null +++ b/core/src/test/java/org/elasticsearch/indices/IndicesModuleTests.java @@ -0,0 +1,81 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.indices; + +import org.apache.lucene.analysis.hunspell.Dictionary; +import org.apache.lucene.search.Query; +import org.elasticsearch.common.inject.ModuleTestCase; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.index.query.QueryParseContext; +import org.elasticsearch.index.query.QueryParser; +import org.elasticsearch.index.query.QueryParsingException; +import org.elasticsearch.index.query.TermQueryParser; + +import java.io.IOException; +import java.io.InputStream; +import java.util.Collections; + +public class IndicesModuleTests extends ModuleTestCase { + + static class FakeQueryParser implements QueryParser { + @Override + public String[] names() { + return new String[] {"fake-query-parser"}; + } + @Override + public Query parse(QueryParseContext parseContext) throws IOException, QueryParsingException { + return null; + } + } + + public void testRegisterQueryParser() { + IndicesModule module = new IndicesModule(Settings.EMPTY); + module.registerQueryParser(FakeQueryParser.class); + assertSetMultiBinding(module, QueryParser.class, FakeQueryParser.class); + } + + public void testRegisterQueryParserDuplicate() { + IndicesModule module = new IndicesModule(Settings.EMPTY); + try { + module.registerQueryParser(TermQueryParser.class); + } catch (IllegalArgumentException e) { + assertEquals(e.getMessage(), "Can't register the same [query_parser] more than once for [" + TermQueryParser.class.getName() + "]"); + } + } + + public void testRegisterHunspellDictionary() throws Exception { + IndicesModule module = new IndicesModule(Settings.EMPTY); + InputStream aff = getClass().getResourceAsStream("/indices/analyze/conf_dir/hunspell/en_US/en_US.aff"); + InputStream dic = getClass().getResourceAsStream("/indices/analyze/conf_dir/hunspell/en_US/en_US.dic"); + Dictionary dictionary = new Dictionary(aff, dic); + module.registerHunspellDictionary("foo", dictionary); + assertMapInstanceBinding(module, String.class, Dictionary.class, Collections.singletonMap("foo", dictionary)); + } + + public void testRegisterHunspellDictionaryDuplicate() { + IndicesModule module = new IndicesModule(Settings.EMPTY); + try { + module.registerQueryParser(TermQueryParser.class); + } catch (IllegalArgumentException e) { + assertEquals(e.getMessage(), "Can't register the same [query_parser] more than once for [" + TermQueryParser.class.getName() + "]"); + } + } + +} diff --git a/core/src/test/java/org/elasticsearch/indices/recovery/RecoveryStatusTests.java b/core/src/test/java/org/elasticsearch/indices/recovery/RecoveryStatusTests.java index 35847f51ab7..af5c2ef8b09 100644 --- a/core/src/test/java/org/elasticsearch/indices/recovery/RecoveryStatusTests.java +++ b/core/src/test/java/org/elasticsearch/indices/recovery/RecoveryStatusTests.java @@ -56,6 +56,13 @@ public class RecoveryStatusTests extends ESSingleNodeTestCase { assertSame(openIndexOutput, indexOutput); openIndexOutput.writeInt(1); } + try { + status.openAndPutIndexOutput("foo.bar", new StoreFileMetaData("foo.bar", 8), status.store()); + fail("file foo.bar is already opened and registered"); + } catch (IllegalStateException ex) { + assertEquals("output for file [foo.bar] has already been created", ex.getMessage()); + // all well = it's already registered + } status.removeOpenIndexOutputs("foo.bar"); Set<String> strings = Sets.newHashSet(status.store().directory().listAll()); String expectedFile = null; diff --git a/core/src/test/java/org/elasticsearch/indices/template/SimpleIndexTemplateIT.java
b/core/src/test/java/org/elasticsearch/indices/template/SimpleIndexTemplateIT.java index 1cbe5eb56dc..b8185250761 100644 --- a/core/src/test/java/org/elasticsearch/indices/template/SimpleIndexTemplateIT.java +++ b/core/src/test/java/org/elasticsearch/indices/template/SimpleIndexTemplateIT.java @@ -344,7 +344,7 @@ public class SimpleIndexTemplateIT extends ESIntegTestCase { .addAlias(new Alias("templated_alias-{index}")) .addAlias(new Alias("filtered_alias").filter("{\"type\":{\"value\":\"type2\"}}")) .addAlias(new Alias("complex_filtered_alias") - .filter(QueryBuilders.termsQuery("_type", "typeX", "typeY", "typeZ").execution("bool"))) + .filter(QueryBuilders.termsQuery("_type", "typeX", "typeY", "typeZ"))) .get(); assertAcked(prepareCreate("test_index").addMapping("type1").addMapping("type2").addMapping("typeX").addMapping("typeY").addMapping("typeZ")); diff --git a/core/src/test/java/org/elasticsearch/monitor/os/OsProbeTests.java b/core/src/test/java/org/elasticsearch/monitor/os/OsProbeTests.java index d5d14e993f8..6b8818a4931 100644 --- a/core/src/test/java/org/elasticsearch/monitor/os/OsProbeTests.java +++ b/core/src/test/java/org/elasticsearch/monitor/os/OsProbeTests.java @@ -45,18 +45,34 @@ public class OsProbeTests extends ESTestCase { OsStats stats = probe.osStats(); assertNotNull(stats); assertThat(stats.getTimestamp(), greaterThan(0L)); - assertThat(stats.getLoadAverage(), anyOf(equalTo((double) -1), greaterThanOrEqualTo((double) 0))); + if (Constants.WINDOWS) { + // Load average is always -1 on Windows platforms + assertThat(stats.getLoadAverage(), equalTo((double) -1)); + } else { + // Load average can be negative if not available or not computed yet, otherwise it should be >= 0 + assertThat(stats.getLoadAverage(), anyOf(lessThan((double) 0), greaterThanOrEqualTo((double) 0))); + } assertNotNull(stats.getMem()); - assertThat(stats.getMem().getTotal().bytes(), anyOf(equalTo(-1L), greaterThan(0L))); - assertThat(stats.getMem().getFree().bytes(), anyOf(equalTo(-1L), greaterThan(0L))); + assertThat(stats.getMem().getTotal().bytes(), greaterThan(0L)); + assertThat(stats.getMem().getFree().bytes(), greaterThan(0L)); assertThat(stats.getMem().getFreePercent(), allOf(greaterThanOrEqualTo((short) 0), lessThanOrEqualTo((short) 100))); - assertThat(stats.getMem().getUsed().bytes(), anyOf(equalTo(-1L), greaterThanOrEqualTo(0L))); + assertThat(stats.getMem().getUsed().bytes(), greaterThan(0L)); assertThat(stats.getMem().getUsedPercent(), allOf(greaterThanOrEqualTo((short) 0), lessThanOrEqualTo((short) 100))); assertNotNull(stats.getSwap()); - assertThat(stats.getSwap().getTotal().bytes(), anyOf(equalTo(-1L), greaterThanOrEqualTo(0L))); - assertThat(stats.getSwap().getFree().bytes(), anyOf(equalTo(-1L), greaterThanOrEqualTo(0L))); - assertThat(stats.getSwap().getUsed().bytes(), anyOf(equalTo(-1L), greaterThanOrEqualTo(0L))); + assertNotNull(stats.getSwap().getTotal()); + + long total = stats.getSwap().getTotal().bytes(); + if (total > 0) { + assertThat(stats.getSwap().getTotal().bytes(), greaterThan(0L)); + assertThat(stats.getSwap().getFree().bytes(), greaterThan(0L)); + assertThat(stats.getSwap().getUsed().bytes(), greaterThanOrEqualTo(0L)); + } else { + // On platforms with no swap + assertThat(stats.getSwap().getTotal().bytes(), equalTo(0L)); + assertThat(stats.getSwap().getFree().bytes(), equalTo(0L)); + assertThat(stats.getSwap().getUsed().bytes(), equalTo(0L)); + } } } diff --git a/core/src/test/java/org/elasticsearch/monitor/process/ProcessProbeTests.java 
b/core/src/test/java/org/elasticsearch/monitor/process/ProcessProbeTests.java index 910e51fddad..449ea124afe 100644 --- a/core/src/test/java/org/elasticsearch/monitor/process/ProcessProbeTests.java +++ b/core/src/test/java/org/elasticsearch/monitor/process/ProcessProbeTests.java @@ -19,6 +19,7 @@ package org.elasticsearch.monitor.process; +import org.apache.lucene.util.Constants; import org.elasticsearch.bootstrap.Bootstrap; import org.elasticsearch.test.ESTestCase; import org.junit.Test; @@ -43,14 +44,29 @@ public class ProcessProbeTests extends ESTestCase { public void testProcessStats() { ProcessStats stats = probe.processStats(); assertNotNull(stats); + assertThat(stats.getTimestamp(), greaterThan(0L)); + + if (Constants.WINDOWS) { + // Open/Max file descriptors are not supported on Windows platforms + assertThat(stats.getOpenFileDescriptors(), equalTo(-1L)); + assertThat(stats.getMaxFileDescriptors(), equalTo(-1L)); + } else { + assertThat(stats.getOpenFileDescriptors(), greaterThan(0L)); + assertThat(stats.getMaxFileDescriptors(), greaterThan(0L)); + } ProcessStats.Cpu cpu = stats.getCpu(); assertNotNull(cpu); - assertThat(cpu.getPercent(), greaterThanOrEqualTo((short) 0)); - assertThat(cpu.total, anyOf(equalTo(-1L), greaterThan(0L))); + + // CPU percent can be negative if the system recent cpu usage is not available + assertThat(cpu.getPercent(), anyOf(lessThan((short) 0), allOf(greaterThanOrEqualTo((short) 0), lessThanOrEqualTo((short) 100)))); + + // CPU time can return -1 if the platform does not support this operation, let's see which platforms fail + assertThat(cpu.total, greaterThan(0L)); ProcessStats.Mem mem = stats.getMem(); assertNotNull(mem); - assertThat(mem.totalVirtual, anyOf(equalTo(-1L), greaterThan(0L))); + // Committed total virtual memory can return -1 if not supported, let's see which platforms fail + assertThat(mem.totalVirtual, greaterThan(0L)); } } diff --git a/core/src/test/java/org/elasticsearch/plugins/PluginManagerIT.java b/core/src/test/java/org/elasticsearch/plugins/PluginManagerIT.java index bb2b01ca979..4acb818c180 100644 --- a/core/src/test/java/org/elasticsearch/plugins/PluginManagerIT.java +++ b/core/src/test/java/org/elasticsearch/plugins/PluginManagerIT.java @@ -52,9 +52,7 @@ import javax.net.ssl.HttpsURLConnection; import javax.net.ssl.SSLContext; import javax.net.ssl.SSLSocketFactory; import java.io.BufferedWriter; -import java.io.FileOutputStream; import java.io.IOException; -import java.io.PrintStream; import java.net.InetSocketAddress; import java.nio.charset.StandardCharsets; import java.nio.file.FileVisitResult; @@ -539,17 +537,19 @@ public class PluginManagerIT extends ESIntegTestCase { @Test public void testOfficialPluginName_ThrowsException() throws IOException { - PluginManager.checkForOfficialPlugins("elasticsearch-analysis-icu"); - PluginManager.checkForOfficialPlugins("elasticsearch-analysis-kuromoji"); - PluginManager.checkForOfficialPlugins("elasticsearch-analysis-phonetic"); - PluginManager.checkForOfficialPlugins("elasticsearch-analysis-smartcn"); - PluginManager.checkForOfficialPlugins("elasticsearch-analysis-stempel"); - PluginManager.checkForOfficialPlugins("elasticsearch-cloud-aws"); - PluginManager.checkForOfficialPlugins("elasticsearch-cloud-azure"); - PluginManager.checkForOfficialPlugins("elasticsearch-cloud-gce"); - PluginManager.checkForOfficialPlugins("elasticsearch-delete-by-query"); - PluginManager.checkForOfficialPlugins("elasticsearch-lang-javascript"); -
PluginManager.checkForOfficialPlugins("elasticsearch-lang-python"); + PluginManager.checkForOfficialPlugins("analysis-icu"); + PluginManager.checkForOfficialPlugins("analysis-kuromoji"); + PluginManager.checkForOfficialPlugins("analysis-phonetic"); + PluginManager.checkForOfficialPlugins("analysis-smartcn"); + PluginManager.checkForOfficialPlugins("analysis-stempel"); + PluginManager.checkForOfficialPlugins("cloud-aws"); + PluginManager.checkForOfficialPlugins("cloud-azure"); + PluginManager.checkForOfficialPlugins("cloud-gce"); + PluginManager.checkForOfficialPlugins("delete-by-query"); + PluginManager.checkForOfficialPlugins("lang-javascript"); + PluginManager.checkForOfficialPlugins("lang-python"); + PluginManager.checkForOfficialPlugins("mapper-murmur3"); + PluginManager.checkForOfficialPlugins("mapper-size"); try { PluginManager.checkForOfficialPlugins("elasticsearch-mapper-attachment");
diff --git a/core/src/test/java/org/elasticsearch/plugins/PluginManagerUnitTests.java b/core/src/test/java/org/elasticsearch/plugins/PluginManagerUnitTests.java index d4986d54fbc..6ee0fc8b8b8 100644 --- a/core/src/test/java/org/elasticsearch/plugins/PluginManagerUnitTests.java +++ b/core/src/test/java/org/elasticsearch/plugins/PluginManagerUnitTests.java @@ -62,7 +62,7 @@ public class PluginManagerUnitTests extends ESTestCase { .build(); Environment environment = new Environment(settings); - PluginManager.PluginHandle pluginHandle = new PluginManager.PluginHandle(pluginName, "version", "user", "repo"); + PluginManager.PluginHandle pluginHandle = new PluginManager.PluginHandle(pluginName, "version", "user"); String configDirPath = Files.simplifyPath(pluginHandle.configDir(environment).normalize().toString()); String expectedDirPath = Files.simplifyPath(genericConfigFolder.resolve(pluginName).normalize().toString()); @@ -82,23 +82,23 @@ public class PluginManagerUnitTests extends ESTestCase { Iterator<URL> iterator = handle.urls().iterator(); if (supportStagingUrls) { - String expectedStagingURL = String.format(Locale.ROOT, "http://download.elastic.co/elasticsearch/staging/%s/org/elasticsearch/plugin/elasticsearch-%s/%s/elasticsearch-%s-%s.zip", - Build.CURRENT.hashShort(), pluginName, Version.CURRENT.number(), pluginName, Version.CURRENT.number()); - assertThat(iterator.next(), is(new URL(expectedStagingURL))); + String expectedStagingURL = String.format(Locale.ROOT, "http://download.elastic.co/elasticsearch/staging/%s-%s/org/elasticsearch/plugin/%s/%s/%s-%s.zip", + Version.CURRENT.number(), Build.CURRENT.hashShort(), pluginName, Version.CURRENT.number(), pluginName, Version.CURRENT.number()); + assertThat(iterator.next().toExternalForm(), is(expectedStagingURL)); } - URL expected = new URL("http", "download.elastic.co", "/elasticsearch/release/org/elasticsearch/plugin/elasticsearch-" + pluginName + "/" + Version.CURRENT.number() + "/elasticsearch-" + + URL expected = new URL("http", "download.elastic.co", "/elasticsearch/release/org/elasticsearch/plugin/" + pluginName + "/" + Version.CURRENT.number() + "/" + pluginName + "-" + Version.CURRENT.number() + ".zip"); - assertThat(iterator.next(), is(expected)); + assertThat(iterator.next().toExternalForm(), is(expected.toExternalForm())); assertThat(iterator.hasNext(), is(false)); } @Test - public void testTrimmingElasticsearchFromOfficialPluginName() throws IOException { - String randomPluginName = randomFrom(PluginManager.OFFICIAL_PLUGINS.asList()).replaceFirst("elasticsearch-", ""); + public void testOfficialPluginName() throws IOException { + String randomPluginName = randomFrom(PluginManager.OFFICIAL_PLUGINS.asList()); PluginManager.PluginHandle handle = PluginManager.PluginHandle.parse(randomPluginName); - assertThat(handle.name, is(randomPluginName.replaceAll("^elasticsearch-", ""))); + assertThat(handle.name, is(randomPluginName)); boolean supportStagingUrls = randomBoolean(); if (supportStagingUrls) { @@ -108,28 +108,26 @@ public class PluginManagerUnitTests extends ESTestCase { Iterator<URL> iterator = handle.urls().iterator(); if (supportStagingUrls) { - String expectedStagingUrl = String.format(Locale.ROOT, "http://download.elastic.co/elasticsearch/staging/%s/org/elasticsearch/plugin/elasticsearch-%s/%s/elasticsearch-%s-%s.zip", - Build.CURRENT.hashShort(), randomPluginName, Version.CURRENT.number(), randomPluginName, Version.CURRENT.number()); - assertThat(iterator.next(), is(new URL(expectedStagingUrl))); + String expectedStagingUrl = String.format(Locale.ROOT, "http://download.elastic.co/elasticsearch/staging/%s-%s/org/elasticsearch/plugin/%s/%s/%s-%s.zip", + Version.CURRENT.number(), Build.CURRENT.hashShort(), randomPluginName, Version.CURRENT.number(), randomPluginName, Version.CURRENT.number()); + assertThat(iterator.next().toExternalForm(), is(expectedStagingUrl)); } - String releaseUrl = String.format(Locale.ROOT, "http://download.elastic.co/elasticsearch/release/org/elasticsearch/plugin/elasticsearch-%s/%s/elasticsearch-%s-%s.zip", + String releaseUrl = String.format(Locale.ROOT, "http://download.elastic.co/elasticsearch/release/org/elasticsearch/plugin/%s/%s/%s-%s.zip", randomPluginName, Version.CURRENT.number(), randomPluginName, Version.CURRENT.number()); - assertThat(iterator.next(), is(new URL(releaseUrl))); + assertThat(iterator.next().toExternalForm(), is(releaseUrl)); assertThat(iterator.hasNext(), is(false)); } @Test - public void testTrimmingElasticsearchFromGithubPluginName() throws IOException { + public void testGithubPluginName() throws IOException { String user = randomAsciiOfLength(6); - String randomName = randomAsciiOfLength(10); - String pluginName = randomFrom("elasticsearch-", "es-") + randomName; + String pluginName = randomAsciiOfLength(10); PluginManager.PluginHandle handle = PluginManager.PluginHandle.parse(user + "/" + pluginName); - assertThat(handle.name, is(randomName)); + assertThat(handle.name, is(pluginName)); assertThat(handle.urls(), hasSize(1)); - URL expected = new URL("https", "github.com", "/" + user + "/" + pluginName + "/" + "archive/master.zip"); - assertThat(handle.urls().get(0), is(expected)); + assertThat(handle.urls().get(0).toExternalForm(), is(new URL("https", "github.com", "/" + user + "/" + pluginName + "/" + "archive/master.zip").toExternalForm())); } @Test
diff --git a/core/src/test/java/org/elasticsearch/recovery/RelocationIT.java b/core/src/test/java/org/elasticsearch/recovery/RelocationIT.java index c73baac193e..53a71e1dc7b 100644 --- a/core/src/test/java/org/elasticsearch/recovery/RelocationIT.java +++ b/core/src/test/java/org/elasticsearch/recovery/RelocationIT.java @@ -402,7 +402,7 @@ public class RelocationIT extends ESIntegTestCase { // Slow down recovery in order to make recovery cancellations more likely IndicesStatsResponse statsResponse = client().admin().indices().prepareStats(indexName).get(); - long chunkSize = statsResponse.getIndex(indexName).getShards()[0].getStats().getStore().size().bytes() / 10; + long chunkSize = Math.max(1, statsResponse.getIndex(indexName).getShards()[0].getStats().getStore().size().bytes() / 10);
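// A quick illustration with hypothetical numbers (not part of the original test): the
// Math.max guard above matters because integer division truncates toward zero, so a
// shard store smaller than 10 bytes would otherwise produce a zero chunk size for the
// recovery throttling settings applied below:
//   long storeSize = 7;                          // e.g. a nearly-empty shard
//   long naive = storeSize / 10;                 // == 0, unusable as a chunk size
//   long guarded = Math.max(1, storeSize / 10);  // == 1, the smallest usable chunk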
assertTrue(client().admin().cluster().prepareUpdateSettings() .setTransientSettings(Settings.builder() // one chunk per sec.. diff --git a/core/src/test/java/org/elasticsearch/search/aggregations/metrics/CardinalityIT.java b/core/src/test/java/org/elasticsearch/search/aggregations/metrics/CardinalityIT.java index 491e4f694c9..d77e4d1ccd0 100644 --- a/core/src/test/java/org/elasticsearch/search/aggregations/metrics/CardinalityIT.java +++ b/core/src/test/java/org/elasticsearch/search/aggregations/metrics/CardinalityIT.java @@ -61,54 +61,23 @@ public class CardinalityIT extends ESIntegTestCase { jsonBuilder().startObject().startObject("type").startObject("properties") .startObject("str_value") .field("type", "string") - .startObject("fields") - .startObject("hash") - .field("type", "murmur3") - .endObject() - .endObject() .endObject() .startObject("str_values") .field("type", "string") - .startObject("fields") - .startObject("hash") - .field("type", "murmur3") - .endObject() - .endObject() .endObject() .startObject("l_value") .field("type", "long") - .startObject("fields") - .startObject("hash") - .field("type", "murmur3") - .endObject() - .endObject() .endObject() .startObject("l_values") .field("type", "long") - .startObject("fields") - .startObject("hash") - .field("type", "murmur3") - .endObject() - .endObject() .endObject() - .startObject("d_value") - .field("type", "double") - .startObject("fields") - .startObject("hash") - .field("type", "murmur3") - .endObject() - .endObject() - .endObject() - .startObject("d_values") - .field("type", "double") - .startObject("fields") - .startObject("hash") - .field("type", "murmur3") - .endObject() - .endObject() - .endObject() - .endObject() - .endObject().endObject()).execute().actionGet(); + .startObject("d_value") + .field("type", "double") + .endObject() + .startObject("d_values") + .field("type", "double") + .endObject() + .endObject().endObject().endObject()).execute().actionGet(); numDocs = randomIntBetween(2, 100); precisionThreshold = randomIntBetween(0, 1 << randomInt(20)); @@ -145,12 +114,12 @@ public class CardinalityIT extends ESIntegTestCase { assertThat(count.getValue(), greaterThan(0L)); } } - private String singleNumericField(boolean hash) { - return (randomBoolean() ? "l_value" : "d_value") + (hash ? ".hash" : ""); + private String singleNumericField() { + return randomBoolean() ? "l_value" : "d_value"; } private String multiNumericField(boolean hash) { - return (randomBoolean() ? "l_values" : "d_values") + (hash ? ".hash" : ""); + return randomBoolean() ? 
"l_values" : "d_values"; } @Test @@ -195,24 +164,10 @@ public class CardinalityIT extends ESIntegTestCase { assertCount(count, numDocs); } - @Test - public void singleValuedStringHashed() throws Exception { - SearchResponse response = client().prepareSearch("idx").setTypes("type") - .addAggregation(cardinality("cardinality").precisionThreshold(precisionThreshold).field("str_value.hash")) - .execute().actionGet(); - - assertSearchResponse(response); - - Cardinality count = response.getAggregations().get("cardinality"); - assertThat(count, notNullValue()); - assertThat(count.getName(), equalTo("cardinality")); - assertCount(count, numDocs); - } - @Test public void singleValuedNumeric() throws Exception { SearchResponse response = client().prepareSearch("idx").setTypes("type") - .addAggregation(cardinality("cardinality").precisionThreshold(precisionThreshold).field(singleNumericField(false))) + .addAggregation(cardinality("cardinality").precisionThreshold(precisionThreshold).field(singleNumericField())) .execute().actionGet(); assertSearchResponse(response); @@ -229,7 +184,7 @@ public class CardinalityIT extends ESIntegTestCase { SearchResponse searchResponse = client().prepareSearch("idx").setQuery(matchAllQuery()) .addAggregation( global("global").subAggregation( - cardinality("cardinality").precisionThreshold(precisionThreshold).field(singleNumericField(false)))) + cardinality("cardinality").precisionThreshold(precisionThreshold).field(singleNumericField()))) .execute().actionGet(); assertSearchResponse(searchResponse); @@ -254,7 +209,7 @@ public class CardinalityIT extends ESIntegTestCase { @Test public void singleValuedNumericHashed() throws Exception { SearchResponse response = client().prepareSearch("idx").setTypes("type") - .addAggregation(cardinality("cardinality").precisionThreshold(precisionThreshold).field(singleNumericField(true))) + .addAggregation(cardinality("cardinality").precisionThreshold(precisionThreshold).field(singleNumericField())) .execute().actionGet(); assertSearchResponse(response); @@ -279,20 +234,6 @@ public class CardinalityIT extends ESIntegTestCase { assertCount(count, numDocs * 2); } - @Test - public void multiValuedStringHashed() throws Exception { - SearchResponse response = client().prepareSearch("idx").setTypes("type") - .addAggregation(cardinality("cardinality").precisionThreshold(precisionThreshold).field("str_values.hash")) - .execute().actionGet(); - - assertSearchResponse(response); - - Cardinality count = response.getAggregations().get("cardinality"); - assertThat(count, notNullValue()); - assertThat(count.getName(), equalTo("cardinality")); - assertCount(count, numDocs * 2); - } - @Test public void multiValuedNumeric() throws Exception { SearchResponse response = client().prepareSearch("idx").setTypes("type") @@ -356,7 +297,7 @@ public class CardinalityIT extends ESIntegTestCase { SearchResponse response = client().prepareSearch("idx").setTypes("type") .addAggregation( cardinality("cardinality").precisionThreshold(precisionThreshold).script( - new Script("doc['" + singleNumericField(false) + "'].value"))) + new Script("doc['" + singleNumericField() + "'].value"))) .execute().actionGet(); assertSearchResponse(response); @@ -417,7 +358,7 @@ public class CardinalityIT extends ESIntegTestCase { public void singleValuedNumericValueScript() throws Exception { SearchResponse response = client().prepareSearch("idx").setTypes("type") .addAggregation( - cardinality("cardinality").precisionThreshold(precisionThreshold).field(singleNumericField(false)) + 
cardinality("cardinality").precisionThreshold(precisionThreshold).field(singleNumericField()) .script(new Script("_value"))) .execute().actionGet(); @@ -464,23 +405,4 @@ public class CardinalityIT extends ESIntegTestCase { } } - @Test - public void asSubAggHashed() throws Exception { - SearchResponse response = client().prepareSearch("idx").setTypes("type") - .addAggregation(terms("terms").field("str_value") - .collectMode(randomFrom(SubAggCollectionMode.values())) - .subAggregation(cardinality("cardinality").precisionThreshold(precisionThreshold).field("str_values.hash"))) - .execute().actionGet(); - - assertSearchResponse(response); - - Terms terms = response.getAggregations().get("terms"); - for (Terms.Bucket bucket : terms.getBuckets()) { - Cardinality count = bucket.getAggregations().get("cardinality"); - assertThat(count, notNullValue()); - assertThat(count.getName(), equalTo("cardinality")); - assertCount(count, 2); - } - } - } diff --git a/core/src/test/java/org/elasticsearch/search/functionscore/DecayFunctionScoreIT.java b/core/src/test/java/org/elasticsearch/search/functionscore/DecayFunctionScoreIT.java index 0848c89ac3e..2d2d72822b8 100644 --- a/core/src/test/java/org/elasticsearch/search/functionscore/DecayFunctionScoreIT.java +++ b/core/src/test/java/org/elasticsearch/search/functionscore/DecayFunctionScoreIT.java @@ -574,7 +574,8 @@ public class DecayFunctionScoreIT extends ESIntegTestCase { "type", jsonBuilder().startObject().startObject("type").startObject("properties").startObject("test").field("type", "string") .endObject().startObject("date").field("type", "date").endObject().startObject("num").field("type", "double") - .endObject().startObject("geo").field("type", "geo_point").endObject().endObject().endObject().endObject())); + .endObject().startObject("geo").field("type", "geo_point").field("coerce", true).endObject().endObject() + .endObject().endObject())); ensureYellow(); int numDocs = 200; List indexBuilders = new ArrayList<>(); diff --git a/core/src/test/java/org/elasticsearch/search/geo/GeoBoundingBoxIT.java b/core/src/test/java/org/elasticsearch/search/geo/GeoBoundingBoxIT.java index cc8471519db..ef82b3b39df 100644 --- a/core/src/test/java/org/elasticsearch/search/geo/GeoBoundingBoxIT.java +++ b/core/src/test/java/org/elasticsearch/search/geo/GeoBoundingBoxIT.java @@ -289,50 +289,50 @@ public class GeoBoundingBoxIT extends ESIntegTestCase { SearchResponse searchResponse = client().prepareSearch() .setQuery( filteredQuery(matchAllQuery(), - geoBoundingBoxQuery("location").topLeft(50, -180).bottomRight(-50, 180)) + geoBoundingBoxQuery("location").coerce(true).topLeft(50, -180).bottomRight(-50, 180)) ).execute().actionGet(); assertThat(searchResponse.getHits().totalHits(), equalTo(1l)); searchResponse = client().prepareSearch() .setQuery( filteredQuery(matchAllQuery(), - geoBoundingBoxQuery("location").topLeft(50, -180).bottomRight(-50, 180).type("indexed")) + geoBoundingBoxQuery("location").coerce(true).topLeft(50, -180).bottomRight(-50, 180).type("indexed")) ).execute().actionGet(); assertThat(searchResponse.getHits().totalHits(), equalTo(1l)); searchResponse = client().prepareSearch() .setQuery( filteredQuery(matchAllQuery(), - geoBoundingBoxQuery("location").topLeft(90, -180).bottomRight(-90, 180)) + geoBoundingBoxQuery("location").coerce(true).topLeft(90, -180).bottomRight(-90, 180)) ).execute().actionGet(); assertThat(searchResponse.getHits().totalHits(), equalTo(2l)); searchResponse = client().prepareSearch() .setQuery( filteredQuery(matchAllQuery(), - 
geoBoundingBoxQuery("location").topLeft(90, -180).bottomRight(-90, 180).type("indexed")) + geoBoundingBoxQuery("location").coerce(true).topLeft(90, -180).bottomRight(-90, 180).type("indexed")) ).execute().actionGet(); assertThat(searchResponse.getHits().totalHits(), equalTo(2l)); searchResponse = client().prepareSearch() .setQuery( filteredQuery(matchAllQuery(), - geoBoundingBoxQuery("location").topLeft(50, 0).bottomRight(-50, 360)) + geoBoundingBoxQuery("location").coerce(true).topLeft(50, 0).bottomRight(-50, 360)) ).execute().actionGet(); assertThat(searchResponse.getHits().totalHits(), equalTo(1l)); searchResponse = client().prepareSearch() .setQuery( filteredQuery(matchAllQuery(), - geoBoundingBoxQuery("location").topLeft(50, 0).bottomRight(-50, 360).type("indexed")) + geoBoundingBoxQuery("location").coerce(true).topLeft(50, 0).bottomRight(-50, 360).type("indexed")) ).execute().actionGet(); assertThat(searchResponse.getHits().totalHits(), equalTo(1l)); searchResponse = client().prepareSearch() .setQuery( filteredQuery(matchAllQuery(), - geoBoundingBoxQuery("location").topLeft(90, 0).bottomRight(-90, 360)) + geoBoundingBoxQuery("location").coerce(true).topLeft(90, 0).bottomRight(-90, 360)) ).execute().actionGet(); assertThat(searchResponse.getHits().totalHits(), equalTo(2l)); searchResponse = client().prepareSearch() .setQuery( filteredQuery(matchAllQuery(), - geoBoundingBoxQuery("location").topLeft(90, 0).bottomRight(-90, 360).type("indexed")) + geoBoundingBoxQuery("location").coerce(true).topLeft(90, 0).bottomRight(-90, 360).type("indexed")) ).execute().actionGet(); assertThat(searchResponse.getHits().totalHits(), equalTo(2l)); } diff --git a/core/src/test/java/org/elasticsearch/search/geo/GeoDistanceIT.java b/core/src/test/java/org/elasticsearch/search/geo/GeoDistanceIT.java index b1e959a5821..69e6ac7df0b 100644 --- a/core/src/test/java/org/elasticsearch/search/geo/GeoDistanceIT.java +++ b/core/src/test/java/org/elasticsearch/search/geo/GeoDistanceIT.java @@ -221,8 +221,8 @@ public class GeoDistanceIT extends ESIntegTestCase { public void testDistanceSortingMVFields() throws Exception { XContentBuilder xContentBuilder = XContentFactory.jsonBuilder().startObject().startObject("type1") .startObject("properties").startObject("locations").field("type", "geo_point").field("lat_lon", true) - .startObject("fielddata").field("format", randomNumericFieldDataFormat()).endObject().endObject().endObject() - .endObject().endObject(); + .field("ignore_malformed", true).field("coerce", true).startObject("fielddata") + .field("format", randomNumericFieldDataFormat()).endObject().endObject().endObject().endObject().endObject(); assertAcked(prepareCreate("test") .addMapping("type1", xContentBuilder)); ensureGreen(); @@ -233,6 +233,11 @@ public class GeoDistanceIT extends ESIntegTestCase { .endObject()).execute().actionGet(); client().prepareIndex("test", "type1", "2").setSource(jsonBuilder().startObject() + .field("names", "New York 2") + .startObject("locations").field("lat", 400.7143528).field("lon", 285.9990269).endObject() + .endObject()).execute().actionGet(); + + client().prepareIndex("test", "type1", "3").setSource(jsonBuilder().startObject() .field("names", "Times Square", "Tribeca") .startArray("locations") // to NY: 5.286 km @@ -242,7 +247,7 @@ public class GeoDistanceIT extends ESIntegTestCase { .endArray() .endObject()).execute().actionGet(); - client().prepareIndex("test", "type1", "3").setSource(jsonBuilder().startObject() + client().prepareIndex("test", "type1", 
"4").setSource(jsonBuilder().startObject() .field("names", "Wall Street", "Soho") .startArray("locations") // to NY: 1.055 km @@ -253,7 +258,7 @@ public class GeoDistanceIT extends ESIntegTestCase { .endObject()).execute().actionGet(); - client().prepareIndex("test", "type1", "4").setSource(jsonBuilder().startObject() + client().prepareIndex("test", "type1", "5").setSource(jsonBuilder().startObject() .field("names", "Greenwich Village", "Brooklyn") .startArray("locations") // to NY: 2.029 km @@ -270,70 +275,76 @@ public class GeoDistanceIT extends ESIntegTestCase { .addSort(SortBuilders.geoDistanceSort("locations").point(40.7143528, -74.0059731).order(SortOrder.ASC)) .execute().actionGet(); - assertHitCount(searchResponse, 4); - assertOrderedSearchHits(searchResponse, "1", "2", "3", "4"); + assertHitCount(searchResponse, 5); + assertOrderedSearchHits(searchResponse, "1", "2", "3", "4", "5"); assertThat(((Number) searchResponse.getHits().getAt(0).sortValues()[0]).doubleValue(), closeTo(0d, 10d)); - assertThat(((Number) searchResponse.getHits().getAt(1).sortValues()[0]).doubleValue(), closeTo(462.1d, 10d)); - assertThat(((Number) searchResponse.getHits().getAt(2).sortValues()[0]).doubleValue(), closeTo(1055.0d, 10d)); - assertThat(((Number) searchResponse.getHits().getAt(3).sortValues()[0]).doubleValue(), closeTo(2029.0d, 10d)); + assertThat(((Number) searchResponse.getHits().getAt(1).sortValues()[0]).doubleValue(), closeTo(421.2d, 10d)); + assertThat(((Number) searchResponse.getHits().getAt(2).sortValues()[0]).doubleValue(), closeTo(462.1d, 10d)); + assertThat(((Number) searchResponse.getHits().getAt(3).sortValues()[0]).doubleValue(), closeTo(1055.0d, 10d)); + assertThat(((Number) searchResponse.getHits().getAt(4).sortValues()[0]).doubleValue(), closeTo(2029.0d, 10d)); // Order: Asc, Mode: max searchResponse = client().prepareSearch("test").setQuery(matchAllQuery()) .addSort(SortBuilders.geoDistanceSort("locations").point(40.7143528, -74.0059731).order(SortOrder.ASC).sortMode("max")) .execute().actionGet(); - assertHitCount(searchResponse, 4); - assertOrderedSearchHits(searchResponse, "1", "3", "2", "4"); + assertHitCount(searchResponse, 5); + assertOrderedSearchHits(searchResponse, "1", "2", "4", "3", "5"); assertThat(((Number) searchResponse.getHits().getAt(0).sortValues()[0]).doubleValue(), closeTo(0d, 10d)); - assertThat(((Number) searchResponse.getHits().getAt(1).sortValues()[0]).doubleValue(), closeTo(1258.0d, 10d)); - assertThat(((Number) searchResponse.getHits().getAt(2).sortValues()[0]).doubleValue(), closeTo(5286.0d, 10d)); - assertThat(((Number) searchResponse.getHits().getAt(3).sortValues()[0]).doubleValue(), closeTo(8572.0d, 10d)); + assertThat(((Number) searchResponse.getHits().getAt(1).sortValues()[0]).doubleValue(), closeTo(421.2d, 10d)); + assertThat(((Number) searchResponse.getHits().getAt(2).sortValues()[0]).doubleValue(), closeTo(1258.0d, 10d)); + assertThat(((Number) searchResponse.getHits().getAt(3).sortValues()[0]).doubleValue(), closeTo(5286.0d, 10d)); + assertThat(((Number) searchResponse.getHits().getAt(4).sortValues()[0]).doubleValue(), closeTo(8572.0d, 10d)); // Order: Desc searchResponse = client().prepareSearch("test").setQuery(matchAllQuery()) .addSort(SortBuilders.geoDistanceSort("locations").point(40.7143528, -74.0059731).order(SortOrder.DESC)) .execute().actionGet(); - assertHitCount(searchResponse, 4); - assertOrderedSearchHits(searchResponse, "4", "2", "3", "1"); + assertHitCount(searchResponse, 5); + assertOrderedSearchHits(searchResponse, "5", "3", "4", 
"2", "1"); assertThat(((Number) searchResponse.getHits().getAt(0).sortValues()[0]).doubleValue(), closeTo(8572.0d, 10d)); assertThat(((Number) searchResponse.getHits().getAt(1).sortValues()[0]).doubleValue(), closeTo(5286.0d, 10d)); assertThat(((Number) searchResponse.getHits().getAt(2).sortValues()[0]).doubleValue(), closeTo(1258.0d, 10d)); - assertThat(((Number) searchResponse.getHits().getAt(3).sortValues()[0]).doubleValue(), closeTo(0d, 10d)); + assertThat(((Number) searchResponse.getHits().getAt(3).sortValues()[0]).doubleValue(), closeTo(421.2d, 10d)); + assertThat(((Number) searchResponse.getHits().getAt(4).sortValues()[0]).doubleValue(), closeTo(0d, 10d)); // Order: Desc, Mode: min searchResponse = client().prepareSearch("test").setQuery(matchAllQuery()) .addSort(SortBuilders.geoDistanceSort("locations").point(40.7143528, -74.0059731).order(SortOrder.DESC).sortMode("min")) .execute().actionGet(); - assertHitCount(searchResponse, 4); - assertOrderedSearchHits(searchResponse, "4", "3", "2", "1"); + assertHitCount(searchResponse, 5); + assertOrderedSearchHits(searchResponse, "5", "4", "3", "2", "1"); assertThat(((Number) searchResponse.getHits().getAt(0).sortValues()[0]).doubleValue(), closeTo(2029.0d, 10d)); assertThat(((Number) searchResponse.getHits().getAt(1).sortValues()[0]).doubleValue(), closeTo(1055.0d, 10d)); assertThat(((Number) searchResponse.getHits().getAt(2).sortValues()[0]).doubleValue(), closeTo(462.1d, 10d)); - assertThat(((Number) searchResponse.getHits().getAt(3).sortValues()[0]).doubleValue(), closeTo(0d, 10d)); + assertThat(((Number) searchResponse.getHits().getAt(3).sortValues()[0]).doubleValue(), closeTo(421.2d, 10d)); + assertThat(((Number) searchResponse.getHits().getAt(4).sortValues()[0]).doubleValue(), closeTo(0d, 10d)); searchResponse = client().prepareSearch("test").setQuery(matchAllQuery()) .addSort(SortBuilders.geoDistanceSort("locations").point(40.7143528, -74.0059731).sortMode("avg").order(SortOrder.ASC)) .execute().actionGet(); - assertHitCount(searchResponse, 4); - assertOrderedSearchHits(searchResponse, "1", "3", "2", "4"); + assertHitCount(searchResponse, 5); + assertOrderedSearchHits(searchResponse, "1", "2", "4", "3", "5"); assertThat(((Number) searchResponse.getHits().getAt(0).sortValues()[0]).doubleValue(), closeTo(0d, 10d)); - assertThat(((Number) searchResponse.getHits().getAt(1).sortValues()[0]).doubleValue(), closeTo(1157d, 10d)); - assertThat(((Number) searchResponse.getHits().getAt(2).sortValues()[0]).doubleValue(), closeTo(2874d, 10d)); - assertThat(((Number) searchResponse.getHits().getAt(3).sortValues()[0]).doubleValue(), closeTo(5301d, 10d)); + assertThat(((Number) searchResponse.getHits().getAt(1).sortValues()[0]).doubleValue(), closeTo(421.2d, 10d)); + assertThat(((Number) searchResponse.getHits().getAt(2).sortValues()[0]).doubleValue(), closeTo(1157d, 10d)); + assertThat(((Number) searchResponse.getHits().getAt(3).sortValues()[0]).doubleValue(), closeTo(2874d, 10d)); + assertThat(((Number) searchResponse.getHits().getAt(4).sortValues()[0]).doubleValue(), closeTo(5301d, 10d)); searchResponse = client().prepareSearch("test").setQuery(matchAllQuery()) .addSort(SortBuilders.geoDistanceSort("locations").point(40.7143528, -74.0059731).sortMode("avg").order(SortOrder.DESC)) .execute().actionGet(); - assertHitCount(searchResponse, 4); - assertOrderedSearchHits(searchResponse, "4", "2", "3", "1"); + assertHitCount(searchResponse, 5); + assertOrderedSearchHits(searchResponse, "5", "3", "4", "2", "1"); assertThat(((Number) 
searchResponse.getHits().getAt(0).sortValues()[0]).doubleValue(), closeTo(5301.0d, 10d)); assertThat(((Number) searchResponse.getHits().getAt(1).sortValues()[0]).doubleValue(), closeTo(2874.0d, 10d)); assertThat(((Number) searchResponse.getHits().getAt(2).sortValues()[0]).doubleValue(), closeTo(1157.0d, 10d)); - assertThat(((Number) searchResponse.getHits().getAt(3).sortValues()[0]).doubleValue(), closeTo(0d, 10d)); + assertThat(((Number) searchResponse.getHits().getAt(3).sortValues()[0]).doubleValue(), closeTo(421.2d, 10d)); + assertThat(((Number) searchResponse.getHits().getAt(4).sortValues()[0]).doubleValue(), closeTo(0d, 10d)); assertFailures(client().prepareSearch("test").setQuery(matchAllQuery()) .addSort(SortBuilders.geoDistanceSort("locations").point(40.7143528, -74.0059731).sortMode("sum")), diff --git a/core/src/test/java/org/elasticsearch/search/internal/InternalSearchHitTests.java b/core/src/test/java/org/elasticsearch/search/internal/InternalSearchHitTests.java new file mode 100644 index 00000000000..cc631d5df2a --- /dev/null +++ b/core/src/test/java/org/elasticsearch/search/internal/InternalSearchHitTests.java @@ -0,0 +1,81 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.search.internal; + +import org.elasticsearch.common.io.stream.BytesStreamOutput; +import org.elasticsearch.common.io.stream.InputStreamStreamInput; +import org.elasticsearch.common.text.StringText; +import org.elasticsearch.search.SearchShardTarget; +import org.elasticsearch.test.ESTestCase; + +import java.io.ByteArrayInputStream; +import java.io.InputStream; +import java.util.HashMap; +import java.util.Map; + +import static org.hamcrest.Matchers.equalTo; +import static org.hamcrest.Matchers.nullValue; + +public class InternalSearchHitTests extends ESTestCase { + + public void testSerializeShardTarget() throws Exception { + SearchShardTarget target = new SearchShardTarget("_node_id", "_index", 0); + + Map<String, InternalSearchHits> innerHits = new HashMap<>(); + InternalSearchHit innerHit1 = new InternalSearchHit(0, "_id", new StringText("_type"), null); + innerHit1.shardTarget(target); + InternalSearchHit innerInnerHit2 = new InternalSearchHit(0, "_id", new StringText("_type"), null); + innerInnerHit2.shardTarget(target); + innerHits.put("1", new InternalSearchHits(new InternalSearchHit[]{innerInnerHit2}, 1, 1f)); + innerHit1.setInnerHits(innerHits); + InternalSearchHit innerHit2 = new InternalSearchHit(0, "_id", new StringText("_type"), null); + innerHit2.shardTarget(target); + InternalSearchHit innerHit3 = new InternalSearchHit(0, "_id", new StringText("_type"), null); + innerHit3.shardTarget(target); + + innerHits = new HashMap<>(); + InternalSearchHit hit1 = new InternalSearchHit(0, "_id", new StringText("_type"), null); + innerHits.put("1", new InternalSearchHits(new InternalSearchHit[]{innerHit1, innerHit2}, 1, 1f)); + innerHits.put("2", new InternalSearchHits(new InternalSearchHit[]{innerHit3}, 1, 1f)); + hit1.shardTarget(target); + hit1.setInnerHits(innerHits); + + InternalSearchHit hit2 = new InternalSearchHit(0, "_id", new StringText("_type"), null); + hit2.shardTarget(target); + + InternalSearchHits hits = new InternalSearchHits(new InternalSearchHit[]{hit1, hit2}, 2, 1f); + + InternalSearchHits.StreamContext context = new InternalSearchHits.StreamContext(); + context.streamShardTarget(InternalSearchHits.StreamContext.ShardTargetType.STREAM); + BytesStreamOutput output = new BytesStreamOutput(); + hits.writeTo(output, context); + InputStream input = new ByteArrayInputStream(output.bytes().toBytes()); + context = new InternalSearchHits.StreamContext(); + context.streamShardTarget(InternalSearchHits.StreamContext.ShardTargetType.STREAM); + InternalSearchHits results = InternalSearchHits.readSearchHits(new InputStreamStreamInput(input), context); + assertThat(results.getAt(0).shard(), equalTo(target)); + assertThat(results.getAt(0).getInnerHits().get("1").getAt(0).shard(), nullValue()); + assertThat(results.getAt(0).getInnerHits().get("1").getAt(0).getInnerHits().get("1").getAt(0).shard(), nullValue()); + assertThat(results.getAt(0).getInnerHits().get("1").getAt(1).shard(), nullValue()); + assertThat(results.getAt(0).getInnerHits().get("2").getAt(0).shard(), nullValue()); + assertThat(results.getAt(1).shard(), equalTo(target)); + } + +}
diff --git a/core/src/test/java/org/elasticsearch/search/query/SearchQueryIT.java b/core/src/test/java/org/elasticsearch/search/query/SearchQueryIT.java index bca495bf31a..07363a4f240 100644 --- a/core/src/test/java/org/elasticsearch/search/query/SearchQueryIT.java +++ b/core/src/test/java/org/elasticsearch/search/query/SearchQueryIT.java @@ -1165,7 +1165,7 @@ public class SearchQueryIT extends ESIntegTestCase { } @Test - public void
testFieldDatatermsQuery() throws Exception { + public void testTermsQuery() throws Exception { assertAcked(prepareCreate("test").addMapping("type", "str", "type=string", "lng", "type=long", "dbl", "type=double")); indexRandom(true, @@ -1175,60 +1175,60 @@ public class SearchQueryIT extends ESIntegTestCase { client().prepareIndex("test", "type", "4").setSource("str", "4", "lng", 4l, "dbl", 4.0d)); SearchResponse searchResponse = client().prepareSearch("test") - .setQuery(filteredQuery(matchAllQuery(), termsQuery("str", "1", "4").execution("fielddata"))).get(); + .setQuery(filteredQuery(matchAllQuery(), termsQuery("str", "1", "4"))).get(); assertHitCount(searchResponse, 2l); assertSearchHits(searchResponse, "1", "4"); searchResponse = client().prepareSearch("test") - .setQuery(filteredQuery(matchAllQuery(), termsQuery("lng", new long[] {2, 3}).execution("fielddata"))).get(); + .setQuery(filteredQuery(matchAllQuery(), termsQuery("lng", new long[] {2, 3}))).get(); assertHitCount(searchResponse, 2l); assertSearchHits(searchResponse, "2", "3"); searchResponse = client().prepareSearch("test") - .setQuery(filteredQuery(matchAllQuery(), termsQuery("dbl", new double[]{2, 3}).execution("fielddata"))).get(); + .setQuery(filteredQuery(matchAllQuery(), termsQuery("dbl", new double[]{2, 3}))).get(); assertHitCount(searchResponse, 2l); assertSearchHits(searchResponse, "2", "3"); searchResponse = client().prepareSearch("test") - .setQuery(filteredQuery(matchAllQuery(), termsQuery("lng", new int[] {1, 3}).execution("fielddata"))).get(); + .setQuery(filteredQuery(matchAllQuery(), termsQuery("lng", new int[] {1, 3}))).get(); assertHitCount(searchResponse, 2l); assertSearchHits(searchResponse, "1", "3"); searchResponse = client().prepareSearch("test") - .setQuery(filteredQuery(matchAllQuery(), termsQuery("dbl", new float[] {2, 4}).execution("fielddata"))).get(); + .setQuery(filteredQuery(matchAllQuery(), termsQuery("dbl", new float[] {2, 4}))).get(); assertHitCount(searchResponse, 2l); assertSearchHits(searchResponse, "2", "4"); // test partial matching searchResponse = client().prepareSearch("test") - .setQuery(filteredQuery(matchAllQuery(), termsQuery("str", "2", "5").execution("fielddata"))).get(); + .setQuery(filteredQuery(matchAllQuery(), termsQuery("str", "2", "5"))).get(); assertNoFailures(searchResponse); assertHitCount(searchResponse, 1l); assertFirstHit(searchResponse, hasId("2")); searchResponse = client().prepareSearch("test") - .setQuery(filteredQuery(matchAllQuery(), termsQuery("dbl", new double[] {2, 5}).execution("fielddata"))).get(); + .setQuery(filteredQuery(matchAllQuery(), termsQuery("dbl", new double[] {2, 5}))).get(); assertNoFailures(searchResponse); assertHitCount(searchResponse, 1l); assertFirstHit(searchResponse, hasId("2")); searchResponse = client().prepareSearch("test") - .setQuery(filteredQuery(matchAllQuery(), termsQuery("lng", new long[] {2, 5}).execution("fielddata"))).get(); + .setQuery(filteredQuery(matchAllQuery(), termsQuery("lng", new long[] {2, 5}))).get(); assertNoFailures(searchResponse); assertHitCount(searchResponse, 1l); assertFirstHit(searchResponse, hasId("2")); // test valid type, but no matching terms searchResponse = client().prepareSearch("test") - .setQuery(filteredQuery(matchAllQuery(), termsQuery("str", "5", "6").execution("fielddata"))).get(); + .setQuery(filteredQuery(matchAllQuery(), termsQuery("str", "5", "6"))).get(); assertHitCount(searchResponse, 0l); searchResponse = client().prepareSearch("test") - .setQuery(filteredQuery(matchAllQuery(), 
termsQuery("dbl", new double[] {5, 6}).execution("fielddata"))).get(); + .setQuery(filteredQuery(matchAllQuery(), termsQuery("dbl", new double[] {5, 6}))).get(); assertHitCount(searchResponse, 0l); searchResponse = client().prepareSearch("test") - .setQuery(filteredQuery(matchAllQuery(), termsQuery("lng", new long[] {5, 6}).execution("fielddata"))).get(); + .setQuery(filteredQuery(matchAllQuery(), termsQuery("lng", new long[] {5, 6}))).get(); assertHitCount(searchResponse, 0l); } diff --git a/core/src/test/java/org/elasticsearch/snapshots/AbstractSnapshotIntegTestCase.java b/core/src/test/java/org/elasticsearch/snapshots/AbstractSnapshotIntegTestCase.java index 2544f14fa3d..4e30e3ca770 100644 --- a/core/src/test/java/org/elasticsearch/snapshots/AbstractSnapshotIntegTestCase.java +++ b/core/src/test/java/org/elasticsearch/snapshots/AbstractSnapshotIntegTestCase.java @@ -28,6 +28,7 @@ import org.elasticsearch.cluster.ClusterStateListener; import org.elasticsearch.cluster.ClusterStateUpdateTask; import org.elasticsearch.cluster.SnapshotsInProgress; import org.elasticsearch.cluster.metadata.SnapshotId; +import org.elasticsearch.cluster.routing.allocation.decider.EnableAllocationDecider; import org.elasticsearch.cluster.service.PendingClusterTask; import org.elasticsearch.common.Priority; import org.elasticsearch.common.settings.Settings; @@ -56,6 +57,9 @@ public abstract class AbstractSnapshotIntegTestCase extends ESIntegTestCase { @Override protected Settings nodeSettings(int nodeOrdinal) { return settingsBuilder().put(super.nodeSettings(nodeOrdinal)) + // Rebalancing is causing some checks after restore to randomly fail + // due to https://github.com/elastic/elasticsearch/issues/9421 + .put(EnableAllocationDecider.CLUSTER_ROUTING_REBALANCE_ENABLE, EnableAllocationDecider.Rebalance.NONE) .extendArray("plugin.types", MockRepository.Plugin.class.getName()).build(); } diff --git a/core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java b/core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java index d89eea7206a..7e282305c89 100644 --- a/core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java +++ b/core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java @@ -307,7 +307,7 @@ public class SharedClusterSnapshotRestoreIT extends AbstractSnapshotIntegTestCas logger.info("--> create index with foo type"); assertAcked(prepareCreate("test-idx", 2, Settings.builder() - .put(indexSettings()).put(SETTING_NUMBER_OF_REPLICAS, between(0, 1)).put("refresh_interval", 10, TimeUnit.SECONDS))); + .put(indexSettings()).put(SETTING_NUMBER_OF_REPLICAS, between(0, 1)).put("refresh_interval", 10, TimeUnit.SECONDS))); NumShards numShards = getNumShards("test-idx"); @@ -322,7 +322,7 @@ public class SharedClusterSnapshotRestoreIT extends AbstractSnapshotIntegTestCas logger.info("--> delete the index and recreate it with bar type"); cluster().wipeIndices("test-idx"); assertAcked(prepareCreate("test-idx", 2, Settings.builder() - .put(SETTING_NUMBER_OF_SHARDS, numShards.numPrimaries).put(SETTING_NUMBER_OF_REPLICAS, between(0, 1)).put("refresh_interval", 5, TimeUnit.SECONDS))); + .put(SETTING_NUMBER_OF_SHARDS, numShards.numPrimaries).put(SETTING_NUMBER_OF_REPLICAS, between(0, 1)).put("refresh_interval", 5, TimeUnit.SECONDS))); assertAcked(client().admin().indices().preparePutMapping("test-idx").setType("bar").setSource("baz", "type=string")); ensureGreen(); @@ -995,7 +995,6 @@ public class SharedClusterSnapshotRestoreIT 
extends AbstractSnapshotIntegTestCas } @Test - @AwaitsFix(bugUrl = "https://github.com/elastic/elasticsearch/issues/12855") public void renameOnRestoreTest() throws Exception { Client client = client(); diff --git a/core/src/test/java/org/elasticsearch/stresstest/leaks/GenericStatsLeak.java b/core/src/test/java/org/elasticsearch/stresstest/leaks/GenericStatsLeak.java index 03d0ca76d04..fc0d5bc5253 100644 --- a/core/src/test/java/org/elasticsearch/stresstest/leaks/GenericStatsLeak.java +++ b/core/src/test/java/org/elasticsearch/stresstest/leaks/GenericStatsLeak.java @@ -32,7 +32,6 @@ public class GenericStatsLeak { Node node = NodeBuilder.nodeBuilder().settings(Settings.settingsBuilder() .put("monitor.os.refresh_interval", 0) .put("monitor.process.refresh_interval", 0) - .put("monitor.network.refresh_interval", 0) ).node(); JvmService jvmService = node.injector().getInstance(JvmService.class); diff --git a/core/src/test/java/org/elasticsearch/test/InternalTestCluster.java b/core/src/test/java/org/elasticsearch/test/InternalTestCluster.java index 2a17f9eb06d..edf133e3fe7 100644 --- a/core/src/test/java/org/elasticsearch/test/InternalTestCluster.java +++ b/core/src/test/java/org/elasticsearch/test/InternalTestCluster.java @@ -94,7 +94,6 @@ import org.elasticsearch.node.NodeMocksPlugin; import org.elasticsearch.node.internal.InternalSettingsPreparer; import org.elasticsearch.node.service.NodeService; import org.elasticsearch.script.ScriptService; -import org.elasticsearch.search.SearchModule; import org.elasticsearch.search.SearchService; import org.elasticsearch.test.disruption.ServiceDisruptionScheme; import org.elasticsearch.search.MockSearchService; @@ -109,6 +108,7 @@ import org.junit.Assert; import java.io.Closeable; import java.io.IOException; +import java.net.InetAddress; import java.net.InetSocketAddress; import java.nio.file.Path; import java.util.ArrayList; @@ -505,7 +505,6 @@ public final class InternalTestCluster extends TestCluster { public static String clusterName(String prefix, long clusterSeed) { StringBuilder builder = new StringBuilder(prefix); final int childVM = RandomizedTest.systemPropertyAsInt(SysGlobals.CHILDVM_SYSPROP_JVM_ID, 0); - builder.append('-').append(NetworkUtils.getLocalHostName("__default_host__")); builder.append("-CHILD_VM=[").append(childVM).append(']'); builder.append("-CLUSTER_SEED=[").append(clusterSeed).append(']'); // if multiple maven task run on a single host we better have an identifier that doesn't rely on input params @@ -1865,7 +1864,7 @@ public final class InternalTestCluster extends TestCluster { } NodeService nodeService = getInstanceFromNode(NodeService.class, nodeAndClient.node); - NodeStats stats = nodeService.stats(CommonStatsFlags.ALL, false, false, false, false, false, false, false, false, false, false); + NodeStats stats = nodeService.stats(CommonStatsFlags.ALL, false, false, false, false, false, false, false, false, false); assertThat("Fielddata size must be 0 on node: " + stats.getNode(), stats.getIndices().getFieldData().getMemorySizeInBytes(), equalTo(0l)); assertThat("Query cache size must be 0 on node: " + stats.getNode(), stats.getIndices().getQueryCache().getMemorySizeInBytes(), equalTo(0l)); assertThat("FixedBitSet cache size must be 0 on node: " + stats.getNode(), stats.getIndices().getSegments().getBitsetMemoryInBytes(), equalTo(0l)); diff --git a/core/src/test/java/org/elasticsearch/test/junit/listeners/ReproduceInfoPrinter.java b/core/src/test/java/org/elasticsearch/test/junit/listeners/ReproduceInfoPrinter.java index 
a824ec7def9..07f09cc2386 100644 --- a/core/src/test/java/org/elasticsearch/test/junit/listeners/ReproduceInfoPrinter.java +++ b/core/src/test/java/org/elasticsearch/test/junit/listeners/ReproduceInfoPrinter.java @@ -156,8 +156,8 @@ public class ReproduceInfoPrinter extends RunListener { public ReproduceErrorMessageBuilder appendESProperties() { appendProperties("es.logger.level"); - if (!inVerifyPhase()) { - // these properties only make sense for unit tests + if (inVerifyPhase()) { + // these properties only make sense for integration tests appendProperties("es.node.mode", "es.node.local", TESTS_CLUSTER, InternalTestCluster.TESTS_ENABLE_MOCK_MODULES); } appendProperties("tests.assertion.disabled", "tests.security.manager", "tests.nightly", "tests.jvms",
diff --git a/core/src/test/java/org/elasticsearch/transport/netty/NettyTransportMultiPortTests.java b/core/src/test/java/org/elasticsearch/transport/netty/NettyTransportMultiPortTests.java index 1c4cac7078e..1a494de4931 100644 --- a/core/src/test/java/org/elasticsearch/transport/netty/NettyTransportMultiPortTests.java +++ b/core/src/test/java/org/elasticsearch/transport/netty/NettyTransportMultiPortTests.java @@ -135,29 +135,6 @@ public class NettyTransportMultiPortTests extends ESTestCase { } } - @Test - public void testThatBindingOnDifferentHostsWorks() throws Exception { - int[] ports = getRandomPorts(2); - InetAddress firstNonLoopbackAddress = NetworkUtils.getFirstNonLoopbackAddress(NetworkUtils.StackType.IPv4); - assumeTrue("No IP-v4 non-loopback address available - are you on a plane?", firstNonLoopbackAddress != null); - Settings settings = settingsBuilder() - .put("network.host", "127.0.0.1") - .put("transport.tcp.port", ports[0]) - .put("transport.profiles.default.bind_host", "127.0.0.1") - .put("transport.profiles.client1.bind_host", firstNonLoopbackAddress.getHostAddress()) - .put("transport.profiles.client1.port", ports[1]) - .build(); - - ThreadPool threadPool = new ThreadPool("tst"); - try (NettyTransport ignored = startNettyTransport(settings, threadPool)) { - assertPortIsBound("127.0.0.1", ports[0]); - assertPortIsBound(firstNonLoopbackAddress.getHostAddress(), ports[1]); - assertConnectionRefused(ports[1]); - } finally { - terminate(threadPool); - } - } - @Test public void testThatProfileWithoutValidNameIsIgnored() throws Exception { int[] ports = getRandomPorts(3);
diff --git a/core/src/test/java/org/elasticsearch/search/geo/gzippedmap.gz b/core/src/test/resources/org/elasticsearch/search/geo/gzippedmap.gz similarity index 100% rename from core/src/test/java/org/elasticsearch/search/geo/gzippedmap.gz rename to core/src/test/resources/org/elasticsearch/search/geo/gzippedmap.gz
diff --git a/dev-tools/build_repositories.sh b/dev-tools/build_repositories.sh index d00f6c2e2ad..eaf982dc4cb 100755 --- a/dev-tools/build_repositories.sh +++ b/dev-tools/build_repositories.sh @@ -158,7 +158,7 @@ mkdir -p $centosdir echo "RPM: Syncing repository for version $version into $centosdir" $s3cmd sync s3://$S3_BUCKET_SYNC_FROM/elasticsearch/$version/centos/ $centosdir -rpm=distribution/rpm/target/releases/signed/elasticsearch*.rpm +rpm=distribution/rpm/target/releases/elasticsearch*.rpm echo "RPM: Copying signed $rpm into $centosdir" cp $rpm $centosdir
diff --git a/dev-tools/pom.xml b/dev-tools/pom.xml index b53889ee731..760f13579a0 100644 --- a/dev-tools/pom.xml +++ b/dev-tools/pom.xml @@ -1,9 +1,9 @@ <modelVersion>4.0.0</modelVersion> <groupId>org.elasticsearch</groupId> -<artifactId>elasticsearch-dev-tools</artifactId> +<artifactId>dev-tools</artifactId> <version>2.1.0-SNAPSHOT</version> -<name>Elasticsearch Build Resources</name> +<name>Build Tools and Resources</name> <description>Tools to assist in building and developing in the Elasticsearch project</description> <parent> <groupId>org.sonatype.oss</groupId> @@ -35,13 +35,6 @@ -<dependencies> -<dependency> -<groupId>org.springframework.build</groupId> -<artifactId>aws-maven</artifactId> -<version>5.0.0.RELEASE</version> -</dependency> -</dependencies> @@ -70,13 +63,6 @@ -<repository> -<id>aws-release</id> -<name>AWS Release Repository</name> -<url>${elasticsearch.s3.repository}</url> -</repository>
diff --git a/dev-tools/prepare_release_create_release_version.py b/dev-tools/prepare_release_create_release_version.py index 07446e58e01..53da1ae39c6 100644 --- a/dev-tools/prepare_release_create_release_version.py +++ b/dev-tools/prepare_release_create_release_version.py @@ -84,17 +84,6 @@ def process_file(file_path, line_callback): os.remove(abs_path) return False -# Moves the pom.xml file from a snapshot to a release -def remove_maven_snapshot(poms, release): - for pom in poms: - if pom: - #print('Replacing SNAPSHOT version in file %s' % (pom)) - pattern = '<version>%s-SNAPSHOT</version>' % (release) - replacement = '<version>%s</version>' % (release) - def callback(line): - return line.replace(pattern, replacement) - process_file(pom, callback) - # Moves the Version.java file from a snapshot to a release def remove_version_snapshot(version_file, release): # 1.0.0.Beta1 -> 1_0_0_Beta1 @@ -108,11 +97,6 @@ def remove_version_snapshot(version_file, release): if not processed: raise RuntimeError('failed to remove snapshot version for %s' % (release)) -# finds all the pom files that do have a -SNAPSHOT version -def find_pom_files_with_snapshots(): - files = subprocess.check_output('find . -name pom.xml -exec grep -l ".*-SNAPSHOT</version>" {} ";"', shell=True) - return files.decode('utf-8').split('\n') - # Checks the pom.xml for the release version. # This method fails if the pom file has no SNAPSHOT version set i.e. # if the version is already on a release version we fail. @@ -132,14 +116,29 @@ if __name__ == "__main__": print('*** Preparing release version: [%s]' % release_version) ensure_checkout_is_clean() - pom_files = find_pom_files_with_snapshots() - remove_maven_snapshot(pom_files, release_version) + run('cd dev-tools && mvn versions:set -DnewVersion=%s -DgenerateBackupPoms=false' % (release_version)) + run('cd rest-api-spec && mvn versions:set -DnewVersion=%s -DgenerateBackupPoms=false' % (release_version)) + run('mvn versions:set -DnewVersion=%s -DgenerateBackupPoms=false' % (release_version)) + remove_version_snapshot(VERSION_FILE, release_version) print('*** Done removing snapshot version. DO NOT COMMIT THIS, WHEN CREATING A RELEASE CANDIDATE.') - shortHash = subprocess.check_output('git log --pretty=format:"%h" -n 1', shell=True) + shortHash = subprocess.check_output('git log --pretty=format:"%h" -n 1', shell=True).decode('utf-8') + localRepo = '/tmp/elasticsearch-%s-%s' % (release_version, shortHash) + localRepoElasticsearch = localRepo + '/org/elasticsearch' print('') print('*** To create a release candidate run: ') - print(' mvn clean deploy -Prelease -DskipTests -Dgpg.keyname="$GPG_KEY_ID" -Dgpg.passphrase="$GPG_PASSPHRASE" -Dpackaging.rpm.rpmbuild=/usr/bin/rpmbuild -Delasticsearch.s3.repository=s3://download.elasticsearch.org/elasticsearch/staging/%s' % (shortHash.decode('utf-8'))) + print(' mvn clean install deploy -Prelease -DskipTests -Dgpg.keyname="D88E42B4" -Dpackaging.rpm.rpmbuild=/usr/bin/rpmbuild -Drpm.sign=true -Dmaven.repo.local=%s -Dno.commit.pattern="\\bno(n|)commit\\b" -Dforbidden.test.signatures=""' % (localRepo)) + print(' 1. Remove all _remote.repositories: find %s -name _remote.repositories -exec rm {} \;' % (localRepoElasticsearch)) + print(' 2. Rename all maven metadata files: for i in $(find %s -name "maven-metadata-local.xml*") ; do mv "$i" "${i/-local/}" ; done' % (localRepoElasticsearch)) + print(' 3. Sync %s into S3 bucket' % (localRepoElasticsearch)) + print (' s3cmd sync %s s3://download.elasticsearch.org/elasticsearch/staging/elasticsearch-%s-%s/maven/org/' % (localRepoElasticsearch, release_version, shortHash)) + print(' 4. Create repositories: ') + print (' export S3_BUCKET_SYNC_TO="download.elasticsearch.org/elasticsearch/staging/elasticsearch-%s-%s/repos"' % (release_version, shortHash)) + print (' export S3_BUCKET_SYNC_FROM="$S3_BUCKET_SYNC_TO"') + print(' dev-tools/build_repositories.sh %s' % (release_version)) + print('') + print('NOTE: the above mvn command will prompt you several times for the GPG passphrase of the key you specified; you can alternatively pass it via -Dgpg.passphrase=yourPassPhrase') + print('NOTE: Running s3cmd might require you to create a config file with your credentials, if s3cmd does not support supplying them via the command line!')
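The remove_version_snapshot step retained above relies on Elasticsearch's convention that a release string maps onto a Version.java constant by swapping dots for underscores, as its "1.0.0.Beta1 -> 1_0_0_Beta1" comment notes. A minimal sketch of that mapping, with a hypothetical helper class (the real script does this with regex rewrites of Version.java):

public class VersionConstantSketch {
    // Derive the Version.java constant name from a release string,
    // e.g. "1.0.0.Beta1" -> "V_1_0_0_Beta1" and "2.1.0" -> "V_2_1_0".
    static String constantName(String releaseVersion) {
        return "V_" + releaseVersion.replace('.', '_');
    }

    public static void main(String[] args) {
        System.out.println(constantName("1.0.0.Beta1")); // V_1_0_0_Beta1
        System.out.println(constantName("2.1.0"));       // V_2_1_0
    }
}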
diff --git a/dev-tools/src/main/resources/ant/integration-tests.xml b/dev-tools/src/main/resources/ant/integration-tests.xml index 3da39286400..d710e57a076 100644 --- a/dev-tools/src/main/resources/ant/integration-tests.xml +++ b/dev-tools/src/main/resources/ant/integration-tests.xml @@ -124,7 +124,7 @@ - + @@ -138,7 +138,7 @@ - +
diff --git a/dev-tools/src/main/resources/forbidden/all-signatures.txt b/dev-tools/src/main/resources/forbidden/all-signatures.txt index 642310519c8..f697b323569 100644 --- a/dev-tools/src/main/resources/forbidden/all-signatures.txt +++ b/dev-tools/src/main/resources/forbidden/all-signatures.txt @@ -18,6 +18,9 @@ java.net.URL#getPath() java.net.URL#getFile() +@defaultMessage Usage of getLocalHost is discouraged +java.net.InetAddress#getLocalHost() + @defaultMessage Use java.nio.file instead of java.io.File API java.util.jar.JarFile java.util.zip.ZipFile
diff --git a/dev-tools/src/main/resources/license-check/check_license_and_sha.pl b/dev-tools/src/main/resources/license-check/check_license_and_sha.pl index 9263244dd2b..4d9d5ba06b8 100755 --- a/dev-tools/src/main/resources/license-check/check_license_and_sha.pl +++ b/dev-tools/src/main/resources/license-check/check_license_and_sha.pl @@ -30,6 +30,7 @@ $Source = File::Spec->rel2abs($Source); say "LICENSE DIR: $License_Dir"; say "SOURCE: $Source"; +say "IGNORE: $Ignore"; die "License dir is not a directory: $License_Dir\n" . usage() unless -d $License_Dir;
diff --git a/distribution/deb/pom.xml b/distribution/deb/pom.xml index 4f1a4b0f95b..a86fad9513d 100644 --- a/distribution/deb/pom.xml +++ b/distribution/deb/pom.xml @@ -5,13 +5,13 @@ <modelVersion>4.0.0</modelVersion> <parent> <groupId>org.elasticsearch.distribution</groupId> -<artifactId>elasticsearch-distribution</artifactId> +<artifactId>distributions</artifactId> <version>2.1.0-SNAPSHOT</version> </parent> <groupId>org.elasticsearch.distribution.deb</groupId> <artifactId>elasticsearch</artifactId> -<name>Elasticsearch DEB Distribution</name> +<name>Distribution: Deb</name> @@ -153,7 +153,7 @@ 1 -127.0.0.1:${integ.transport.port} +localhost:${integ.transport.port}
diff --git a/distribution/rpm/pom.xml b/distribution/rpm/pom.xml index cd8f321689e..488ed97ac04 100644 --- a/distribution/rpm/pom.xml +++ b/distribution/rpm/pom.xml @@ -5,13 +5,13 @@ <modelVersion>4.0.0</modelVersion> <parent> <groupId>org.elasticsearch.distribution</groupId> -<artifactId>elasticsearch-distribution</artifactId> +<artifactId>distributions</artifactId> <version>2.1.0-SNAPSHOT</version> </parent> <groupId>org.elasticsearch.distribution.rpm</groupId> <artifactId>elasticsearch</artifactId> -<name>Elasticsearch RPM Distribution</name> +<name>Distribution: RPM</name> <packaging>rpm</packaging> <description>The RPM distribution of Elasticsearch</description>
diff --git a/distribution/shaded/pom.xml b/distribution/shaded/pom.xml index a0624745e03..6a4b54f7b18 100644 --- a/distribution/shaded/pom.xml +++ b/distribution/shaded/pom.xml @@ -5,13 +5,13 @@ <modelVersion>4.0.0</modelVersion> <parent> <groupId>org.elasticsearch.distribution</groupId> -<artifactId>elasticsearch-distribution</artifactId> +<artifactId>distributions</artifactId> <version>2.1.0-SNAPSHOT</version> </parent> <groupId>org.elasticsearch.distribution.shaded</groupId> <artifactId>elasticsearch</artifactId> -<name>Elasticsearch Shaded Distribution</name> +<name>Distribution: Shaded JAR</name>
diff --git a/distribution/src/main/packaging/systemd/elasticsearch.service b/distribution/src/main/packaging/systemd/elasticsearch.service index 6791992df0c..cdcad9d93dd 100644 --- a/distribution/src/main/packaging/systemd/elasticsearch.service +++ b/distribution/src/main/packaging/systemd/elasticsearch.service @@ -19,12 +19,12 @@ User=${packaging.elasticsearch.user} Group=${packaging.elasticsearch.group} ExecStart=${packaging.elasticsearch.bin.dir}/elasticsearch \ - -Des.pidfile=$PID_DIR/elasticsearch.pid \ - -Des.default.path.home=$ES_HOME \ - -Des.default.path.logs=$LOG_DIR \ - -Des.default.path.data=$DATA_DIR \ - -Des.default.config=$CONF_FILE \ - -Des.default.path.conf=$CONF_DIR + -Des.pidfile=${PID_DIR}/elasticsearch.pid \ + -Des.default.path.home=${ES_HOME} \ + -Des.default.path.logs=${LOG_DIR} \ + -Des.default.path.data=${DATA_DIR} \ + -Des.default.config=${CONF_FILE} \ + -Des.default.path.conf=${CONF_DIR} # Connects standard output to /dev/null StandardOutput=null
diff --git a/distribution/src/main/resources/bin/elasticsearch b/distribution/src/main/resources/bin/elasticsearch index f35e2d29a1e..2d831485a33 100755 --- a/distribution/src/main/resources/bin/elasticsearch +++ b/distribution/src/main/resources/bin/elasticsearch @@ -47,7 +47,7 @@ # hasn't been done, we assume that this is not a packaged version and the # user has forgotten to run Maven to create a package. IS_PACKAGED_VERSION='${project.parent.artifactId}' -if [ "$IS_PACKAGED_VERSION" != "elasticsearch-distribution" ]; then +if [ "$IS_PACKAGED_VERSION" != "distributions" ]; then cat >&2 << EOF Error: You must build the project with Maven or download a pre-built package before you can run Elasticsearch.
See 'Building from Source' in README.textile
diff --git a/distribution/tar/pom.xml b/distribution/tar/pom.xml index d84450b1d22..33181b281ab 100644 --- a/distribution/tar/pom.xml +++ b/distribution/tar/pom.xml @@ -5,13 +5,13 @@ <modelVersion>4.0.0</modelVersion> <parent> <groupId>org.elasticsearch.distribution</groupId> -<artifactId>elasticsearch-distribution</artifactId> +<artifactId>distributions</artifactId> <version>2.1.0-SNAPSHOT</version> </parent> <groupId>org.elasticsearch.distribution.tar</groupId> <artifactId>elasticsearch</artifactId> -<name>Elasticsearch TAR Distribution</name> +<name>Distribution: TAR</name>
diff --git a/plugins/cloud-azure/pom.xml b/plugins/cloud-azure/pom.xml --- a/plugins/cloud-azure/pom.xml +++ b/plugins/cloud-azure/pom.xml <parent> <groupId>org.elasticsearch.plugin</groupId> -<artifactId>elasticsearch-plugin</artifactId> +<artifactId>plugins</artifactId> <version>2.1.0-SNAPSHOT</version> </parent> -<artifactId>elasticsearch-cloud-azure</artifactId> -<name>Elasticsearch Azure cloud plugin</name> +<artifactId>cloud-azure</artifactId> +<name>Plugin: Cloud: Azure</name> <description>The Azure Cloud plugin allows to use Azure API for the unicast discovery mechanism and add Azure storage repositories.</description>
diff --git a/plugins/cloud-gce/pom.xml b/plugins/cloud-gce/pom.xml index 1d62fdb327e..724036aafcd 100644 --- a/plugins/cloud-gce/pom.xml +++ b/plugins/cloud-gce/pom.xml @@ -17,12 +17,12 @@ governing permissions and limitations under the License. --> <parent> <groupId>org.elasticsearch.plugin</groupId> -<artifactId>elasticsearch-plugin</artifactId> +<artifactId>plugins</artifactId> <version>2.1.0-SNAPSHOT</version> </parent> -<artifactId>elasticsearch-cloud-gce</artifactId> -<name>Elasticsearch Google Compute Engine cloud plugin</name> +<artifactId>cloud-gce</artifactId> +<name>Plugin: Cloud: Google Compute Engine</name> <description>The Google Compute Engine (GCE) Cloud plugin allows to use GCE API for the unicast discovery mechanism.</description>
diff --git a/plugins/delete-by-query/pom.xml b/plugins/delete-by-query/pom.xml index d7ea468088e..07d3bb8fbe8 100644 --- a/plugins/delete-by-query/pom.xml +++ b/plugins/delete-by-query/pom.xml @@ -17,12 +17,12 @@ governing permissions and limitations under the License. --> <parent> <groupId>org.elasticsearch.plugin</groupId> -<artifactId>elasticsearch-plugin</artifactId> +<artifactId>plugins</artifactId> <version>2.1.0-SNAPSHOT</version> </parent> -<artifactId>elasticsearch-delete-by-query</artifactId> -<name>Elasticsearch Delete By Query plugin</name> +<artifactId>delete-by-query</artifactId> +<name>Plugin: Delete By Query</name> <description>The Delete By Query plugin allows to delete documents in Elasticsearch with a single query.</description>
diff --git a/plugins/jvm-example/pom.xml b/plugins/jvm-example/pom.xml index 0bfd84b82b9..be67cf54eb1 100644 --- a/plugins/jvm-example/pom.xml +++ b/plugins/jvm-example/pom.xml @@ -6,12 +6,12 @@ <parent> <groupId>org.elasticsearch.plugin</groupId> -<artifactId>elasticsearch-plugin</artifactId> +<artifactId>plugins</artifactId> <version>2.1.0-SNAPSHOT</version> </parent> -<artifactId>elasticsearch-jvm-example</artifactId> -<name>Elasticsearch example JVM plugin</name> +<artifactId>jvm-example</artifactId> +<name>Plugin: JVM example</name> <description>Demonstrates all the pluggable Java entry points in Elasticsearch</description>
diff --git a/plugins/lang-javascript/pom.xml b/plugins/lang-javascript/pom.xml index f283aa73989..eaaa29a34b7 100644 --- a/plugins/lang-javascript/pom.xml +++ b/plugins/lang-javascript/pom.xml @@ -6,12 +6,12 @@ <parent> <groupId>org.elasticsearch.plugin</groupId> -<artifactId>elasticsearch-plugin</artifactId> +<artifactId>plugins</artifactId> <version>2.1.0-SNAPSHOT</version> </parent> -<artifactId>elasticsearch-lang-javascript</artifactId> -<name>Elasticsearch JavaScript language plugin</name> +<artifactId>lang-javascript</artifactId> +<name>Plugin: Language: JavaScript</name> <description>The JavaScript language plugin allows to have javascript as the language of scripts to execute.</description>
diff --git a/plugins/lang-python/pom.xml b/plugins/lang-python/pom.xml index 704aff5379c..7b44f61889b 100644 --- a/plugins/lang-python/pom.xml +++ b/plugins/lang-python/pom.xml @@ -6,12 +6,12 @@ <parent> <groupId>org.elasticsearch.plugin</groupId> -<artifactId>elasticsearch-plugin</artifactId> +<artifactId>plugins</artifactId> <version>2.1.0-SNAPSHOT</version> </parent> -<artifactId>elasticsearch-lang-python</artifactId> -<name>Elasticsearch Python language plugin</name> +<artifactId>lang-python</artifactId> +<name>Plugin: Language: Python</name> <description>The Python language plugin allows to have python as the language of scripts to execute.</description>
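The pom renames above pair with the PluginManager changes earlier in this patch: official plugins lose their elasticsearch- prefix, and download URLs are built from the bare plugin id plus the Elasticsearch version (and, for staging, the build hash). A sketch of that URL scheme, using the exact format strings asserted in PluginManagerUnitTests; the class name and sample values are invented for illustration:

import java.util.Locale;

public class PluginUrlSketch {
    static String releaseUrl(String pluginName, String version) {
        return String.format(Locale.ROOT,
            "http://download.elastic.co/elasticsearch/release/org/elasticsearch/plugin/%s/%s/%s-%s.zip",
            pluginName, version, pluginName, version);
    }

    static String stagingUrl(String pluginName, String version, String buildHash) {
        return String.format(Locale.ROOT,
            "http://download.elastic.co/elasticsearch/staging/%s-%s/org/elasticsearch/plugin/%s/%s/%s-%s.zip",
            version, buildHash, pluginName, version, pluginName, version);
    }

    public static void main(String[] args) {
        System.out.println(releaseUrl("cloud-azure", "2.1.0"));            // release artifact
        System.out.println(stagingUrl("cloud-azure", "2.1.0", "abc1234")); // staging artifact
    }
}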
diff --git a/plugins/mapper-murmur3/licenses/no_deps.txt b/plugins/mapper-murmur3/licenses/no_deps.txt
new file mode 100644
index 00000000000..8cce254d037
--- /dev/null
+++ b/plugins/mapper-murmur3/licenses/no_deps.txt
@@ -0,0 +1 @@
+This plugin has no third party dependencies
diff --git a/plugins/mapper-murmur3/pom.xml b/plugins/mapper-murmur3/pom.xml
new file mode 100644
index 00000000000..9c7440d9e72
--- /dev/null
+++ b/plugins/mapper-murmur3/pom.xml
@@ -0,0 +1,43 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+
+    <parent>
+        <groupId>org.elasticsearch.plugin</groupId>
+        <artifactId>plugins</artifactId>
+        <version>2.1.0-SNAPSHOT</version>
+    </parent>
+
+    <artifactId>mapper-murmur3</artifactId>
+    <name>Plugin: Mapper: Murmur3</name>
+    <description>The Mapper Murmur3 plugin allows to compute hashes of a field's values at index-time and to store them in the index.</description>
+
+    <properties>
+        <elasticsearch.plugin.classname>org.elasticsearch.plugin.mapper.MapperMurmur3Plugin</elasticsearch.plugin.classname>
+        <tests.rest.suite>mapper_murmur3</tests.rest.suite>
+        <tests.rest.load_packaged>false</tests.rest.load_packaged>
+    </properties>
+
+    <build>
+        <plugins>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-assembly-plugin</artifactId>
+            </plugin>
+        </plugins>
+    </build>
+
+</project>
diff --git a/plugins/mapper-murmur3/rest-api-spec/test/mapper_murmur3/10_basic.yaml b/plugins/mapper-murmur3/rest-api-spec/test/mapper_murmur3/10_basic.yaml
new file mode 100644
index 00000000000..4ed879c7192
--- /dev/null
+++ b/plugins/mapper-murmur3/rest-api-spec/test/mapper_murmur3/10_basic.yaml
@@ -0,0 +1,65 @@
+# Integration tests for Mapper Murmur3 components
+#
+
+---
+"Mapper Murmur3":
+
+  - do:
+      indices.create:
+        index: test
+        body:
+          mappings:
+            type1: { "properties": { "foo": { "type": "string", "fields": { "hash": { "type": "murmur3" } } } } }
+
+  - do:
+      index:
+        index: test
+        type: type1
+        id: 0
+        body: { "foo": null }
+
+  - do:
+      indices.refresh: {}
+
+  - do:
+      search:
+        body: { "aggs": { "foo_count": { "cardinality": { "field": "foo.hash" } } } }
+
+  - match: { aggregations.foo_count.value: 0 }
+
+  - do:
+      index:
+        index: test
+        type: type1
+        id: 1
+        body: { "foo": "bar" }
+
+  - do:
+      index:
+        index: test
+        type: type1
+        id: 2
+        body: { "foo": "baz" }
+
+  - do:
+      index:
+        index: test
+        type: type1
+        id: 3
+        body: { "foo": "quux" }
+
+  - do:
+      index:
+        index: test
+        type: type1
+        id: 4
+        body: { "foo": "bar" }
+
+  - do:
+      indices.refresh: {}
+
+  - do:
+      search:
+        body: { "aggs": { "foo_count": { "cardinality": { "field": "foo.hash" } } } }
+
+  - match: { aggregations.foo_count.value: 3 }
diff --git a/core/src/main/java/org/elasticsearch/index/mapper/core/Murmur3FieldMapper.java b/plugins/mapper-murmur3/src/main/java/org/elasticsearch/index/mapper/murmur3/Murmur3FieldMapper.java
similarity index 85%
rename from core/src/main/java/org/elasticsearch/index/mapper/core/Murmur3FieldMapper.java
rename to plugins/mapper-murmur3/src/main/java/org/elasticsearch/index/mapper/murmur3/Murmur3FieldMapper.java
index 5e7b664aceb..60c31c3f765 100644
--- a/core/src/main/java/org/elasticsearch/index/mapper/core/Murmur3FieldMapper.java
+++ b/plugins/mapper-murmur3/src/main/java/org/elasticsearch/index/mapper/murmur3/Murmur3FieldMapper.java
@@ -17,9 +17,10 @@
  * under the License.
  */
 
-package org.elasticsearch.index.mapper.core;
+package org.elasticsearch.index.mapper.murmur3;
 
 import org.apache.lucene.document.Field;
+import org.apache.lucene.index.IndexOptions;
 import org.apache.lucene.util.BytesRef;
 import org.elasticsearch.Version;
 import org.elasticsearch.common.Explicit;
@@ -31,12 +32,13 @@
 import org.elasticsearch.index.mapper.MappedFieldType;
 import org.elasticsearch.index.mapper.Mapper;
 import org.elasticsearch.index.mapper.MapperParsingException;
 import org.elasticsearch.index.mapper.ParseContext;
+import org.elasticsearch.index.mapper.core.LongFieldMapper;
+import org.elasticsearch.index.mapper.core.NumberFieldMapper;
 
 import java.io.IOException;
 import java.util.List;
 import java.util.Map;
 
-import static org.elasticsearch.index.mapper.MapperBuilders.murmur3Field;
 import static org.elasticsearch.index.mapper.core.TypeParsers.parseNumberField;
 
 public class Murmur3FieldMapper extends LongFieldMapper {
@@ -45,6 +47,9 @@ public class Murmur3FieldMapper extends LongFieldMapper {
 
     public static class Defaults extends LongFieldMapper.Defaults {
         public static final MappedFieldType FIELD_TYPE = new Murmur3FieldType();
+        static {
+            FIELD_TYPE.freeze();
+        }
     }
 
     public static class Builder extends NumberFieldMapper.Builder<Builder, Murmur3FieldMapper> {
@@ -65,6 +70,17 @@ public class Murmur3FieldMapper extends LongFieldMapper {
             return fieldMapper;
         }
 
+        @Override
+        protected void setupFieldType(BuilderContext context) {
+            super.setupFieldType(context);
+            if (context.indexCreatedVersion().onOrAfter(Version.V_2_0_0_beta1)) {
+                fieldType.setIndexOptions(IndexOptions.NONE);
+                defaultFieldType.setIndexOptions(IndexOptions.NONE);
+                fieldType.setHasDocValues(true);
+                defaultFieldType.setHasDocValues(true);
+            }
+        }
+
         @Override
         protected NamedAnalyzer makeNumberAnalyzer(int precisionStep) {
             return NumericLongAnalyzer.buildNamedAnalyzer(precisionStep);
@@ -80,7 +96,7 @@ public class Murmur3FieldMapper extends LongFieldMapper {
         @Override
         @SuppressWarnings("unchecked")
         public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext parserContext) throws MapperParsingException {
-            Builder builder = murmur3Field(name);
+            Builder builder = new Builder(name);
 
             // tweaking these settings is no longer allowed, the entire purpose of murmur3 fields is to store a hash
             if (parserContext.indexVersionCreated().onOrAfter(Version.V_2_0_0_beta1)) {
@@ -92,6 +108,10 @@ public class Murmur3FieldMapper extends LongFieldMapper {
                 }
             }
 
+            if (parserContext.indexVersionCreated().before(Version.V_2_0_0_beta1)) {
+                builder.indexOptions(IndexOptions.DOCS);
+            }
+
             parseNumberField(builder, name, node, parserContext);
             // Because this mapper extends LongFieldMapper the null_value field will be added to the JSON when transferring cluster state
             // between nodes so we have to remove the entry here so that the validation doesn't fail
@@ -104,7 +124,8 @@ public class Murmur3FieldMapper extends LongFieldMapper {
 
     // this only exists so a check can be done to match the field type to using murmur3 hashing...
     public static class Murmur3FieldType extends LongFieldMapper.LongFieldType {
-        public Murmur3FieldType() {}
+        public Murmur3FieldType() {
+        }
 
         protected Murmur3FieldType(Murmur3FieldType ref) {
             super(ref);
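For context, here is what the moved mapper looks like from the REST layer once the plugin is installed. A hedged sketch that mirrors the 10_basic.yaml test above; index and field names are the test's own, and the endpoint style follows the rest of this repo's examples:

# Assumes a local node with the mapper-murmur3 plugin installed.
# A string field gets a murmur3 sub-field whose doc-values hash feeds a
# cardinality aggregation.
curl -XPUT 'http://localhost:9200/test' -d '
{
  "mappings": {
    "type1": {
      "properties": {
        "foo": {
          "type": "string",
          "fields": { "hash": { "type": "murmur3" } }
        }
      }
    }
  }
}'

curl -XPUT 'http://localhost:9200/test/type1/1' -d '{ "foo": "bar" }'
curl -XPOST 'http://localhost:9200/test/_refresh'

curl -XGET 'http://localhost:9200/test/_search?pretty=true' -d '
{
  "aggs": { "foo_count": { "cardinality": { "field": "foo.hash" } } }
}'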
diff --git a/plugins/mapper-murmur3/src/main/java/org/elasticsearch/index/mapper/murmur3/RegisterMurmur3FieldMapper.java b/plugins/mapper-murmur3/src/main/java/org/elasticsearch/index/mapper/murmur3/RegisterMurmur3FieldMapper.java
new file mode 100644
index 00000000000..5a6a71222c0
--- /dev/null
+++ b/plugins/mapper-murmur3/src/main/java/org/elasticsearch/index/mapper/murmur3/RegisterMurmur3FieldMapper.java
@@ -0,0 +1,36 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.index.mapper.murmur3;
+
+import org.elasticsearch.common.inject.Inject;
+import org.elasticsearch.common.settings.Settings;
+import org.elasticsearch.index.AbstractIndexComponent;
+import org.elasticsearch.index.Index;
+import org.elasticsearch.index.mapper.MapperService;
+
+public class RegisterMurmur3FieldMapper extends AbstractIndexComponent {
+
+    @Inject
+    public RegisterMurmur3FieldMapper(Index index, Settings indexSettings, MapperService mapperService) {
+        super(index, indexSettings);
+        mapperService.documentMapperParser().putTypeParser(Murmur3FieldMapper.CONTENT_TYPE, new Murmur3FieldMapper.TypeParser());
+    }
+
+}
diff --git a/plugins/mapper-murmur3/src/main/java/org/elasticsearch/plugin/mapper/MapperMurmur3IndexModule.java b/plugins/mapper-murmur3/src/main/java/org/elasticsearch/plugin/mapper/MapperMurmur3IndexModule.java
new file mode 100644
index 00000000000..51054d774bd
--- /dev/null
+++ b/plugins/mapper-murmur3/src/main/java/org/elasticsearch/plugin/mapper/MapperMurmur3IndexModule.java
@@ -0,0 +1,31 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.plugin.mapper;
+
+import org.elasticsearch.common.inject.AbstractModule;
+import org.elasticsearch.index.mapper.murmur3.RegisterMurmur3FieldMapper;
+
+public class MapperMurmur3IndexModule extends AbstractModule {
+
+    @Override
+    protected void configure() {
+        bind(RegisterMurmur3FieldMapper.class).asEagerSingleton();
+    }
+}
diff --git a/plugins/mapper-murmur3/src/main/java/org/elasticsearch/plugin/mapper/MapperMurmur3Plugin.java b/plugins/mapper-murmur3/src/main/java/org/elasticsearch/plugin/mapper/MapperMurmur3Plugin.java
new file mode 100644
index 00000000000..9b6611decde
--- /dev/null
+++ b/plugins/mapper-murmur3/src/main/java/org/elasticsearch/plugin/mapper/MapperMurmur3Plugin.java
@@ -0,0 +1,45 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.plugin.mapper;
+
+import org.elasticsearch.common.inject.Module;
+import org.elasticsearch.plugins.AbstractPlugin;
+
+import java.util.Collection;
+import java.util.Collections;
+
+public class MapperMurmur3Plugin extends AbstractPlugin {
+
+    @Override
+    public String name() {
+        return "mapper-murmur3";
+    }
+
+    @Override
+    public String description() {
+        return "A mapper that allows to precompute murmur3 hashes of values at index-time and store them in the index";
+    }
+
+    @Override
+    public Collection<Class<? extends Module>> indexModules() {
+        return Collections.<Class<? extends Module>>singleton(MapperMurmur3IndexModule.class);
+    }
+
+}
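The string returned by name() above doubles as the plugin's install name. A short sketch, assuming the 2.x-era bin/plugin script and the _cat API, neither of which is shown in this patch:

# Install the plugin on a local node, then confirm it is loaded.
bin/plugin install mapper-murmur3
curl -XGET 'http://localhost:9200/_cat/plugins?v'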
diff --git a/plugins/mapper-murmur3/src/test/java/org/elasticsearch/index/mapper/murmur3/MapperMurmur3RestIT.java b/plugins/mapper-murmur3/src/test/java/org/elasticsearch/index/mapper/murmur3/MapperMurmur3RestIT.java
new file mode 100644
index 00000000000..f440a343bc6
--- /dev/null
+++ b/plugins/mapper-murmur3/src/test/java/org/elasticsearch/index/mapper/murmur3/MapperMurmur3RestIT.java
@@ -0,0 +1,42 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.index.mapper.murmur3;
+
+import com.carrotsearch.randomizedtesting.annotations.Name;
+import com.carrotsearch.randomizedtesting.annotations.ParametersFactory;
+
+import org.elasticsearch.test.rest.ESRestTestCase;
+import org.elasticsearch.test.rest.RestTestCandidate;
+import org.elasticsearch.test.rest.parser.RestTestParseException;
+
+import java.io.IOException;
+
+public class MapperMurmur3RestIT extends ESRestTestCase {
+
+    public MapperMurmur3RestIT(@Name("yaml") RestTestCandidate testCandidate) {
+        super(testCandidate);
+    }
+
+    @ParametersFactory
+    public static Iterable<Object[]> parameters() throws IOException, RestTestParseException {
+        return ESRestTestCase.createParameters(0, 1);
+    }
+}
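To exercise the suite above against a packaged node, the era's convention was RandomizedTesting driven through Maven. An illustrative invocation from the repo root; the tests.class property follows that convention and is not part of this patch:

# Run only the murmur3 REST integration test in its module.
mvn -pl plugins/mapper-murmur3 test -Dtests.class=org.elasticsearch.index.mapper.murmur3.MapperMurmur3RestIT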