From 68307aa9f3636bdccd1f1ca90f1d0fd640a019a5 Mon Sep 17 00:00:00 2001
From: Robert Muir
Date: Mon, 17 Aug 2015 15:37:07 -0400
Subject: [PATCH 01/39] Fix network binding for ipv4/ipv6

When elasticsearch is configured by interface (or default: loopback
interfaces), bind to all addresses on the interface rather than an
arbitrary one.

If the publish address is not specified, default it from the bound
addresses based on the following sort ordering:

* ipv4/ipv6 (java.net.preferIPv4Stack, defaults to true)
* ordinary addresses
* site-local addresses
* link local addresses
* loopback addresses

Only one address is published, and multicast is still always over ipv4:
these need to be future improvements.

Closes #12906
Closes #12915

Squashed commit of the following:

commit 7e60833312f329a5749f9a256b9c1331a956d98f
Author: Robert Muir
Date:   Mon Aug 17 14:45:33 2015 -0400

    fix java 7 compilation oops

commit c7b9f3a42058beb061b05c6dd67fd91477fd258a
Author: Robert Muir
Date:   Mon Aug 17 14:24:16 2015 -0400

    Cleanup/fix logic around custom resolvers

commit bd7065f1936e14a29c9eb8fe4ecab0ce512ac08e
Author: Robert Muir
Date:   Mon Aug 17 13:29:42 2015 -0400

    Add some unit tests for utility methods

commit 0faf71cb0ee9a45462d58af3d1bf214e8a79347c
Author: Robert Muir
Date:   Mon Aug 17 12:11:48 2015 -0400

    localhost all the way down

commit e198bb2bc0d1673288b96e07e6e6ad842179978c
Merge: b55d092 b93a75f
Author: Robert Muir
Date:   Mon Aug 17 12:05:02 2015 -0400

    Merge branch 'master' into network_cleanup

commit b55d092811d7832bae579c5586e171e9cc1ebe9d
Author: Robert Muir
Date:   Mon Aug 17 12:03:03 2015 -0400

    fix docs, fix another bug in multicast (publish host = bad here!)

commit 88c462eb302b30a82585f95413927a5cbb7d54c4
Author: Robert Muir
Date:   Mon Aug 17 11:50:49 2015 -0400

    remove nocommit

commit 89547d7b10d68b23d7f24362e1f4782f5e1ca03c
Author: Robert Muir
Date:   Mon Aug 17 11:49:35 2015 -0400

    fix http too

commit 9b9413aca8a3f6397b5031831f910791b685e5be
Author: Robert Muir
Date:   Mon Aug 17 11:06:02 2015 -0400

    Fix transport / interface code

    Next up: multicast and then http
---
 README.textile                                 |  34 +-
 core/README.textile                            |  34 +-
 .../cluster/node/DiscoveryNode.java            |   4 +-
 .../common/network/NetworkService.java         | 160 +++---
 .../common/network/NetworkUtils.java           | 456 +++++++-----------
 .../zen/ping/multicast/MulticastZenPing.java   |   4 +-
 .../http/netty/NettyHttpServerTransport.java   |  69 ++-
 .../transport/netty/NettyTransport.java        |  52 +-
 .../common/network/NetworkUtilsTests.java      |  77 +++
 .../test/InternalTestCluster.java              |   2 +-
 .../netty/NettyTransportMultiPortTests.java    |  23 -
 .../main/resources/ant/integration-tests.xml   |   4 +-
 distribution/pom.xml                           |   2 +-
 docs/reference/modules/discovery/zen.asciidoc  |   2 +-
 docs/reference/modules/network.asciidoc        |  28 +-
 .../cloud/aws/network/Ec2NameResolver.java     |   9 +-
 plugins/pom.xml                                |   2 +-
 17 files changed, 475 insertions(+), 487 deletions(-)
 create mode 100644 core/src/test/java/org/elasticsearch/common/network/NetworkUtilsTests.java

diff --git a/README.textile b/README.textile
index f48e40c2eaf..63f1841822c 100644
--- a/README.textile
+++ b/README.textile
@@ -42,7 +42,7 @@ h3. Installation
 
 * "Download":https://www.elastic.co/downloads/elasticsearch and unzip the Elasticsearch official distribution.
 * Run @bin/elasticsearch@ on unix, or @bin\elasticsearch.bat@ on windows.
-* Run @curl -X GET http://127.0.0.1:9200/@.
+* Run @curl -X GET http://localhost:9200/@.
 * Start more servers ...
 
 h3. Indexing
 
@@ -50,16 +50,16 @@ h3. Indexing
 
Let's try and index some twitter like information. First, let's create a twitter user, and add some tweets (the @twitter@ index will be created automatically):
-curl -XPUT 'http://127.0.0.1:9200/twitter/user/kimchy' -d '{ "name" : "Shay Banon" }'
+curl -XPUT 'http://localhost:9200/twitter/user/kimchy' -d '{ "name" : "Shay Banon" }'
 
-curl -XPUT 'http://127.0.0.1:9200/twitter/tweet/1' -d '
+curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '
 {
     "user": "kimchy",
     "postDate": "2009-11-15T13:12:00",
     "message": "Trying out Elasticsearch, so far so good?"
 }'
 
-curl -XPUT 'http://127.0.0.1:9200/twitter/tweet/2' -d '
+curl -XPUT 'http://localhost:9200/twitter/tweet/2' -d '
 {
     "user": "kimchy",
     "postDate": "2009-11-15T14:12:12",
@@ -70,9 +70,9 @@ curl -XPUT 'http://127.0.0.1:9200/twitter/tweet/2' -d '
 Now, let's see if the information was added by GETting it:
 
 
-curl -XGET 'http://127.0.0.1:9200/twitter/user/kimchy?pretty=true'
-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/1?pretty=true'
-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/2?pretty=true'
+curl -XGET 'http://localhost:9200/twitter/user/kimchy?pretty=true'
+curl -XGET 'http://localhost:9200/twitter/tweet/1?pretty=true'
+curl -XGET 'http://localhost:9200/twitter/tweet/2?pretty=true'
 
h3. Searching

@@ -81,13 +81,13 @@ Mmm search..., shouldn't it be elastic? Let's find all the tweets that @kimchy@ posted:
-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/_search?q=user:kimchy&pretty=true'
+curl -XGET 'http://localhost:9200/twitter/tweet/_search?q=user:kimchy&pretty=true'
 
We can also use the JSON query language Elasticsearch provides instead of a query string:
-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/twitter/tweet/_search?pretty=true' -d '
 {
     "query" : {
         "match" : { "user": "kimchy" }
@@ -98,7 +98,7 @@ curl -XGET 'http://127.0.0.1:9200/twitter/tweet/_search?pretty=true' -d '
 Just for kicks, let's get all the documents stored (we should see the user as well):
 
 
-curl -XGET 'http://127.0.0.1:9200/twitter/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
 {
     "query" : {
         "matchAll" : {}
@@ -109,7 +109,7 @@ curl -XGET 'http://127.0.0.1:9200/twitter/_search?pretty=true' -d '
 We can also do range search (the @postDate@ was automatically identified as date)
 
 
-curl -XGET 'http://127.0.0.1:9200/twitter/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
 {
     "query" : {
         "range" : {
@@ -130,16 +130,16 @@ Elasticsearch supports multiple indices, as well as multiple types per index. In
 Another way to define our simple twitter system is to have a different index per user (note, though that each index has an overhead). Here is the indexing curl's in this case:
 
 
-curl -XPUT 'http://127.0.0.1:9200/kimchy/info/1' -d '{ "name" : "Shay Banon" }'
+curl -XPUT 'http://localhost:9200/kimchy/info/1' -d '{ "name" : "Shay Banon" }'
 
-curl -XPUT 'http://127.0.0.1:9200/kimchy/tweet/1' -d '
+curl -XPUT 'http://localhost:9200/kimchy/tweet/1' -d '
 {
     "user": "kimchy",
     "postDate": "2009-11-15T13:12:00",
     "message": "Trying out Elasticsearch, so far so good?"
 }'
 
-curl -XPUT 'http://127.0.0.1:9200/kimchy/tweet/2' -d '
+curl -XPUT 'http://localhost:9200/kimchy/tweet/2' -d '
 {
     "user": "kimchy",
     "postDate": "2009-11-15T14:12:12",
@@ -152,7 +152,7 @@ The above will index information into the @kimchy@ index, with two types, @info@
 Complete control on the index level is allowed. As an example, in the above case, we would want to change from the default 5 shards with 1 replica per index, to only 1 shard with 1 replica per index (== per twitter user). Here is how this can be done (the configuration can be in yaml as well):
 
 
-curl -XPUT http://127.0.0.1:9200/another_user/ -d '
+curl -XPUT http://localhost:9200/another_user/ -d '
 {
     "index" : {
         "numberOfShards" : 1,
@@ -165,7 +165,7 @@ Search (and similar operations) are multi index aware. This means that we can ea
 index (twitter user), for example:
 
 
-curl -XGET 'http://127.0.0.1:9200/kimchy,another_user/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -d '
 {
     "query" : {
         "matchAll" : {}
@@ -176,7 +176,7 @@ curl -XGET 'http://127.0.0.1:9200/kimchy,another_user/_search?pretty=true' -d '
 Or on all the indices:
 
 
-curl -XGET 'http://127.0.0.1:9200/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/_search?pretty=true' -d '
 {
     "query" : {
         "matchAll" : {}
diff --git a/core/README.textile b/core/README.textile
index b2873e8b56e..720f357406b 100644
--- a/core/README.textile
+++ b/core/README.textile
@@ -42,7 +42,7 @@ h3. Installation
 
 * "Download":https://www.elastic.co/downloads/elasticsearch and unzip the Elasticsearch official distribution.
 * Run @bin/elasticsearch@ on unix, or @bin\elasticsearch.bat@ on windows.
-* Run @curl -X GET http://127.0.0.1:9200/@.
+* Run @curl -X GET http://localhost:9200/@.
 * Start more servers ...
 
 h3. Indexing
@@ -50,16 +50,16 @@ h3. Indexing
 Let's try and index some twitter like information. First, let's create a twitter user, and add some tweets (the @twitter@ index will be created automatically):
 
 
-curl -XPUT 'http://127.0.0.1:9200/twitter/user/kimchy' -d '{ "name" : "Shay Banon" }'
+curl -XPUT 'http://localhost:9200/twitter/user/kimchy' -d '{ "name" : "Shay Banon" }'
 
-curl -XPUT 'http://127.0.0.1:9200/twitter/tweet/1' -d '
+curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '
 {
     "user": "kimchy",
     "postDate": "2009-11-15T13:12:00",
     "message": "Trying out Elasticsearch, so far so good?"
 }'
 
-curl -XPUT 'http://127.0.0.1:9200/twitter/tweet/2' -d '
+curl -XPUT 'http://localhost:9200/twitter/tweet/2' -d '
 {
     "user": "kimchy",
     "postDate": "2009-11-15T14:12:12",
@@ -70,9 +70,9 @@ curl -XPUT 'http://127.0.0.1:9200/twitter/tweet/2' -d '
 Now, let's see if the information was added by GETting it:
 
 
-curl -XGET 'http://127.0.0.1:9200/twitter/user/kimchy?pretty=true'
-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/1?pretty=true'
-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/2?pretty=true'
+curl -XGET 'http://localhost:9200/twitter/user/kimchy?pretty=true'
+curl -XGET 'http://localhost:9200/twitter/tweet/1?pretty=true'
+curl -XGET 'http://localhost:9200/twitter/tweet/2?pretty=true'
 
h3. Searching

@@ -81,13 +81,13 @@ Mmm search..., shouldn't it be elastic? Let's find all the tweets that @kimchy@ posted:
-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/_search?q=user:kimchy&pretty=true'
+curl -XGET 'http://localhost:9200/twitter/tweet/_search?q=user:kimchy&pretty=true'
 
We can also use the JSON query language Elasticsearch provides instead of a query string:
-curl -XGET 'http://127.0.0.1:9200/twitter/tweet/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/twitter/tweet/_search?pretty=true' -d '
 {
     "query" : {
         "match" : { "user": "kimchy" }
@@ -98,7 +98,7 @@ curl -XGET 'http://127.0.0.1:9200/twitter/tweet/_search?pretty=true' -d '
 Just for kicks, let's get all the documents stored (we should see the user as well):
 
 
-curl -XGET 'http://127.0.0.1:9200/twitter/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
 {
     "query" : {
         "matchAll" : {}
@@ -109,7 +109,7 @@ curl -XGET 'http://127.0.0.1:9200/twitter/_search?pretty=true' -d '
 We can also do range search (the @postDate@ was automatically identified as date)
 
 
-curl -XGET 'http://127.0.0.1:9200/twitter/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
 {
     "query" : {
         "range" : {
@@ -130,16 +130,16 @@ Elasticsearch supports multiple indices, as well as multiple types per index. In
 Another way to define our simple twitter system is to have a different index per user (note, though that each index has an overhead). Here is the indexing curl's in this case:
 
 
-curl -XPUT 'http://127.0.0.1:9200/kimchy/info/1' -d '{ "name" : "Shay Banon" }'
+curl -XPUT 'http://localhost:9200/kimchy/info/1' -d '{ "name" : "Shay Banon" }'
 
-curl -XPUT 'http://127.0.0.1:9200/kimchy/tweet/1' -d '
+curl -XPUT 'http://localhost:9200/kimchy/tweet/1' -d '
 {
     "user": "kimchy",
     "postDate": "2009-11-15T13:12:00",
     "message": "Trying out Elasticsearch, so far so good?"
 }'
 
-curl -XPUT 'http://127.0.0.1:9200/kimchy/tweet/2' -d '
+curl -XPUT 'http://localhost:9200/kimchy/tweet/2' -d '
 {
     "user": "kimchy",
     "postDate": "2009-11-15T14:12:12",
@@ -152,7 +152,7 @@ The above will index information into the @kimchy@ index, with two types, @info@
 Complete control on the index level is allowed. As an example, in the above case, we would want to change from the default 5 shards with 1 replica per index, to only 1 shard with 1 replica per index (== per twitter user). Here is how this can be done (the configuration can be in yaml as well):
 
 
-curl -XPUT http://127.0.0.1:9200/another_user/ -d '
+curl -XPUT http://localhost:9200/another_user/ -d '
 {
     "index" : {
         "numberOfShards" : 1,
@@ -165,7 +165,7 @@ Search (and similar operations) are multi index aware. This means that we can ea
 index (twitter user), for example:
 
 
-curl -XGET 'http://127.0.0.1:9200/kimchy,another_user/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -d '
 {
     "query" : {
         "matchAll" : {}
@@ -176,7 +176,7 @@ curl -XGET 'http://127.0.0.1:9200/kimchy,another_user/_search?pretty=true' -d '
 Or on all the indices:
 
 
-curl -XGET 'http://127.0.0.1:9200/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/_search?pretty=true' -d '
 {
     "query" : {
         "matchAll" : {}
diff --git a/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNode.java b/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNode.java
index 8d63654e07e..87fe5017fa6 100644
--- a/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNode.java
+++ b/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNode.java
@@ -21,6 +21,7 @@ package org.elasticsearch.cluster.node;
 
 import com.google.common.collect.ImmutableList;
 import com.google.common.collect.ImmutableMap;
+
 import org.elasticsearch.Version;
 import org.elasticsearch.common.Booleans;
 import org.elasticsearch.common.Strings;
@@ -33,6 +34,7 @@ import org.elasticsearch.common.xcontent.ToXContent;
 import org.elasticsearch.common.xcontent.XContentBuilder;
 
 import java.io.IOException;
+import java.net.InetAddress;
 import java.util.Map;
 
 import static org.elasticsearch.common.transport.TransportAddressSerializers.addressToStream;
@@ -136,7 +138,7 @@ public class DiscoveryNode implements Streamable, ToXContent {
      * @param version    the version of the node.
      */
     public DiscoveryNode(String nodeName, String nodeId, TransportAddress address, Map<String, String> attributes, Version version) {
-        this(nodeName, nodeId, NetworkUtils.getLocalHostName(""), NetworkUtils.getLocalHostAddress(""), address, attributes, version);
+        this(nodeName, nodeId, NetworkUtils.getLocalHost().getHostName(), NetworkUtils.getLocalHost().getHostAddress(), address, attributes, version);
     }
 
     /**
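
For reference, the constructor above now derives both host name and address from the single getLocalHost() helper added later in this patch's NetworkUtils, which falls back to loopback instead of failing. A minimal usage sketch (the class name is hypothetical; it assumes the patched NetworkUtils is on the classpath):

    import org.elasticsearch.common.network.NetworkUtils;

    import java.net.InetAddress;

    public class LocalHostSketch {
        public static void main(String[] args) {
            // getLocalHost() never throws: if local host resolution is
            // misconfigured it logs a warning and returns loopback instead.
            InetAddress localhost = NetworkUtils.getLocalHost();
            System.out.println(localhost.getHostName());    // node host name
            System.out.println(localhost.getHostAddress()); // node host address
        }
    }
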
diff --git a/core/src/main/java/org/elasticsearch/common/network/NetworkService.java b/core/src/main/java/org/elasticsearch/common/network/NetworkService.java
index bd45987f05c..9f6b77aa90d 100644
--- a/core/src/main/java/org/elasticsearch/common/network/NetworkService.java
+++ b/core/src/main/java/org/elasticsearch/common/network/NetworkService.java
@@ -28,11 +28,8 @@ import org.elasticsearch.common.unit.TimeValue;
 
 import java.io.IOException;
 import java.net.InetAddress;
-import java.net.NetworkInterface;
 import java.net.UnknownHostException;
-import java.util.Collection;
 import java.util.List;
-import java.util.Locale;
 import java.util.concurrent.CopyOnWriteArrayList;
 import java.util.concurrent.TimeUnit;
 
@@ -41,7 +38,8 @@ import java.util.concurrent.TimeUnit;
  */
 public class NetworkService extends AbstractComponent {
 
-    public static final String LOCAL = "#local#";
+    /** By default, we bind to loopback interfaces */
+    public static final String DEFAULT_NETWORK_HOST = "_local_";
 
     private static final String GLOBAL_NETWORK_HOST_SETTING = "network.host";
     private static final String GLOBAL_NETWORK_BINDHOST_SETTING = "network.bind_host";
@@ -71,12 +69,12 @@ public class NetworkService extends AbstractComponent {
         /**
          * Resolves the default value if possible. If not, return null.
          */
-        InetAddress resolveDefault();
+        InetAddress[] resolveDefault();
 
         /**
          * Resolves a custom value handling, return null if can't handle it.
          */
-        InetAddress resolveIfPossible(String value);
+        InetAddress[] resolveIfPossible(String value);
     }
 
    private final List<CustomNameResolver> customNameResolvers = new CopyOnWriteArrayList<>();
@@ -94,100 +92,86 @@ public class NetworkService extends AbstractComponent {
         customNameResolvers.add(customNameResolver);
     }
 
-
-    public InetAddress resolveBindHostAddress(String bindHost) throws IOException {
-        return resolveBindHostAddress(bindHost, InetAddress.getLoopbackAddress().getHostAddress());
-    }
-
-    public InetAddress resolveBindHostAddress(String bindHost, String defaultValue2) throws IOException {
-        return resolveInetAddress(bindHost, settings.get(GLOBAL_NETWORK_BINDHOST_SETTING, settings.get(GLOBAL_NETWORK_HOST_SETTING)), defaultValue2);
-    }
-
-    public InetAddress resolvePublishHostAddress(String publishHost) throws IOException {
-        InetAddress address = resolvePublishHostAddress(publishHost,
-                InetAddress.getLoopbackAddress().getHostAddress());
-        // verify that its not a local address
-        if (address == null || address.isAnyLocalAddress()) {
-            address = NetworkUtils.getFirstNonLoopbackAddress(NetworkUtils.StackType.IPv4);
-            if (address == null) {
-                address = NetworkUtils.getFirstNonLoopbackAddress(NetworkUtils.getIpStackType());
-                if (address == null) {
-                    address = NetworkUtils.getLocalAddress();
-                    if (address == null) {
-                        return NetworkUtils.getLocalhost(NetworkUtils.StackType.IPv4);
-                    }
-                }
-            }
+    public InetAddress[] resolveBindHostAddress(String bindHost) throws IOException {
+        // first check settings
+        if (bindHost == null) {
+            bindHost = settings.get(GLOBAL_NETWORK_BINDHOST_SETTING, settings.get(GLOBAL_NETWORK_HOST_SETTING));
         }
-        return address;
-    }
-
-    public InetAddress resolvePublishHostAddress(String publishHost, String defaultValue2) throws IOException {
-        return resolveInetAddress(publishHost, settings.get(GLOBAL_NETWORK_PUBLISHHOST_SETTING, settings.get(GLOBAL_NETWORK_HOST_SETTING)), defaultValue2);
-    }
-
-    public InetAddress resolveInetAddress(String host, String defaultValue1, String defaultValue2) throws UnknownHostException, IOException {
-        if (host == null) {
-            host = defaultValue1;
-        }
-        if (host == null) {
-            host = defaultValue2;
-        }
-        if (host == null) {
+        // next check any registered custom resolvers
+        if (bindHost == null) {
             for (CustomNameResolver customNameResolver : customNameResolvers) {
-                InetAddress inetAddress = customNameResolver.resolveDefault();
-                if (inetAddress != null) {
-                    return inetAddress;
+                InetAddress addresses[] = customNameResolver.resolveDefault();
+                if (addresses != null) {
+                    return addresses;
                 }
             }
-            return null;
         }
-        String origHost = host;
+        // finally, fill with our default
+        if (bindHost == null) {
+            bindHost = DEFAULT_NETWORK_HOST;
+        }
+        return resolveInetAddress(bindHost);
+    }
+
+    // TODO: needs to be InetAddress[]
+    public InetAddress resolvePublishHostAddress(String publishHost) throws IOException {
+        // first check settings
+        if (publishHost == null) {
+            publishHost = settings.get(GLOBAL_NETWORK_PUBLISHHOST_SETTING, settings.get(GLOBAL_NETWORK_HOST_SETTING));
+        }
+        // next check any registered custom resolvers
+        if (publishHost == null) {
+            for (CustomNameResolver customNameResolver : customNameResolvers) {
+                InetAddress addresses[] = customNameResolver.resolveDefault();
+                if (addresses != null) {
+                    return addresses[0];
+                }
+            }
+        }
+        // finally, fill with our default
+        if (publishHost == null) {
+            publishHost = DEFAULT_NETWORK_HOST;
+        }
+        // TODO: allow publishing multiple addresses
+        return resolveInetAddress(publishHost)[0];
+    }
+
+    private InetAddress[] resolveInetAddress(String host) throws UnknownHostException, IOException {
         if ((host.startsWith("#") && host.endsWith("#")) || (host.startsWith("_") && host.endsWith("_"))) {
             host = host.substring(1, host.length() - 1);
-
+            // allow custom resolvers to have special names
             for (CustomNameResolver customNameResolver : customNameResolvers) {
-                InetAddress inetAddress = customNameResolver.resolveIfPossible(host);
-                if (inetAddress != null) {
-                    return inetAddress;
+                InetAddress addresses[] = customNameResolver.resolveIfPossible(host);
+                if (addresses != null) {
+                    return addresses;
                 }
             }
-
-            if (host.equals("local")) {
-                return NetworkUtils.getLocalAddress();
-            } else if (host.startsWith("non_loopback")) {
-                if (host.toLowerCase(Locale.ROOT).endsWith(":ipv4")) {
-                    return NetworkUtils.getFirstNonLoopbackAddress(NetworkUtils.StackType.IPv4);
-                } else if (host.toLowerCase(Locale.ROOT).endsWith(":ipv6")) {
-                    return NetworkUtils.getFirstNonLoopbackAddress(NetworkUtils.StackType.IPv6);
-                } else {
-                    return NetworkUtils.getFirstNonLoopbackAddress(NetworkUtils.getIpStackType());
-                }
-            } else {
-                NetworkUtils.StackType stackType = NetworkUtils.getIpStackType();
-                if (host.toLowerCase(Locale.ROOT).endsWith(":ipv4")) {
-                    stackType = NetworkUtils.StackType.IPv4;
-                    host = host.substring(0, host.length() - 5);
-                } else if (host.toLowerCase(Locale.ROOT).endsWith(":ipv6")) {
-                    stackType = NetworkUtils.StackType.IPv6;
-                    host = host.substring(0, host.length() - 5);
-                }
-                Collection<NetworkInterface> allInterfs = NetworkUtils.getAllAvailableInterfaces();
-                for (NetworkInterface ni : allInterfs) {
-                    if (!ni.isUp()) {
-                        continue;
+            switch (host) {
+                case "local":
+                    return NetworkUtils.getLoopbackAddresses();
+                case "local:ipv4":
+                    return NetworkUtils.filterIPV4(NetworkUtils.getLoopbackAddresses());
+                case "local:ipv6":
+                    return NetworkUtils.filterIPV6(NetworkUtils.getLoopbackAddresses());
+                case "non_loopback":
+                    return NetworkUtils.getFirstNonLoopbackAddresses();
+                case "non_loopback:ipv4":
+                    return NetworkUtils.filterIPV4(NetworkUtils.getFirstNonLoopbackAddresses());
+                case "non_loopback:ipv6":
+                    return NetworkUtils.filterIPV6(NetworkUtils.getFirstNonLoopbackAddresses());
+                default:
+                    /* an interface specification */
+                    if (host.endsWith(":ipv4")) {
+                        host = host.substring(0, host.length() - 5);
+                        return NetworkUtils.filterIPV4(NetworkUtils.getAddressesForInterface(host));
+                    } else if (host.endsWith(":ipv6")) {
+                        host = host.substring(0, host.length() - 5);
+                        return NetworkUtils.filterIPV6(NetworkUtils.getAddressesForInterface(host));
+                    } else {
+                        return NetworkUtils.getAddressesForInterface(host);
                     }
-                    if (host.equals(ni.getName()) || host.equals(ni.getDisplayName())) {
-                        if (ni.isLoopback()) {
-                            return NetworkUtils.getFirstAddress(ni, stackType);
-                        } else {
-                            return NetworkUtils.getFirstNonLoopbackAddress(ni, stackType);
-                        }
-                    }
-                }
             }
-            throw new IOException("Failed to find network interface for [" + origHost + "]");
         }
-        return InetAddress.getByName(host);
+        return NetworkUtils.getAllByName(host);
     }
 }
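
Each special value in the switch above now maps onto exactly one NetworkUtils helper. A rough sketch of that mapping as standalone code (the class name and the interface name "eth0" are illustrative; the helpers throw IllegalArgumentException when nothing matches):

    import org.elasticsearch.common.network.NetworkUtils;

    import java.net.InetAddress;
    import java.util.Arrays;

    public class HostResolutionSketch {
        public static void main(String[] args) throws Exception {
            InetAddress[] local = NetworkUtils.getLoopbackAddresses();           // "_local_"
            InetAddress[] v4    = NetworkUtils.filterIPV4(local);                // "_local:ipv4_"
            InetAddress[] ext   = NetworkUtils.getFirstNonLoopbackAddresses();   // "_non_loopback_"
            InetAddress[] eth0  = NetworkUtils.getAddressesForInterface("eth0"); // "_eth0_"
            InetAddress[] named = NetworkUtils.getAllByName("localhost");        // plain hostname
            System.out.println(Arrays.toString(local));
        }
    }
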
diff --git a/core/src/main/java/org/elasticsearch/common/network/NetworkUtils.java b/core/src/main/java/org/elasticsearch/common/network/NetworkUtils.java
index 67710f9da6b..14c8d3d794e 100644
--- a/core/src/main/java/org/elasticsearch/common/network/NetworkUtils.java
+++ b/core/src/main/java/org/elasticsearch/common/network/NetworkUtils.java
@@ -19,303 +19,205 @@
 
 package org.elasticsearch.common.network;
 
-import com.google.common.collect.Lists;
 import org.apache.lucene.util.BytesRef;
-import org.apache.lucene.util.CollectionUtil;
 import org.apache.lucene.util.Constants;
 import org.elasticsearch.common.logging.ESLogger;
 import org.elasticsearch.common.logging.Loggers;
 
-import java.net.*;
-import java.util.*;
+import java.net.Inet4Address;
+import java.net.Inet6Address;
+import java.net.InetAddress;
+import java.net.NetworkInterface;
+import java.net.SocketException;
+import java.net.UnknownHostException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.List;
 
 /**
- *
+ * Utilities for network interfaces / addresses
  */
 public abstract class NetworkUtils {
 
+    /** no instantiation */
+    private NetworkUtils() {}
+    
+    /**
+     * By default we bind to any addresses on an interface/name, unless restricted by :ipv4 etc.
+     * This property is unrelated to that, this is about what we *publish*. Today the code pretty much
+     * expects one address so this is used for the sort order.
+     * @deprecated transition mechanism only
+     */
+    @Deprecated
+    static final boolean PREFER_V4 = Boolean.parseBoolean(System.getProperty("java.net.preferIPv4Stack", "true")); 
+    
+    /** Sorts an address by preference. This way code like publishing can just pick the first one */
+    static int sortKey(InetAddress address, boolean prefer_v4) {
+        int key = address.getAddress().length;
+        if (prefer_v4 == false) {
+            key = -key;
+        }
+        
+        if (address.isAnyLocalAddress()) {
+            key += 5;
+        }
+        if (address.isMulticastAddress()) {
+            key += 4;
+        }
+        if (address.isLoopbackAddress()) {
+            key += 3;
+        }
+        if (address.isLinkLocalAddress()) {
+            key += 2;
+        }
+        if (address.isSiteLocalAddress()) {
+            key += 1;
+        }
+
+        return key;
+    }
+
+    /** 
+     * Sorts addresses by order of preference. This is used to pick the first one for publishing
+     * @deprecated remove this when multihoming is really correct
+     */
+    @Deprecated
+    private static void sortAddresses(List<InetAddress> list) {
+        Collections.sort(list, new Comparator<InetAddress>() {
+            @Override
+            public int compare(InetAddress left, InetAddress right) {
+                int cmp = Integer.compare(sortKey(left, PREFER_V4), sortKey(right, PREFER_V4));
+                if (cmp == 0) {
+                    cmp = new BytesRef(left.getAddress()).compareTo(new BytesRef(right.getAddress()));
+                }
+                return cmp;
+            }
+        });
+    }
+    
     private final static ESLogger logger = Loggers.getLogger(NetworkUtils.class);
 
-    public static enum StackType {
-        IPv4, IPv6, Unknown
+    /** Return all interfaces (and subinterfaces) on the system */
+    static List<NetworkInterface> getInterfaces() throws SocketException {
+        List<NetworkInterface> all = new ArrayList<>();
+        addAllInterfaces(all, Collections.list(NetworkInterface.getNetworkInterfaces()));
+        Collections.sort(all, new Comparator<NetworkInterface>() {
+            @Override
+            public int compare(NetworkInterface left, NetworkInterface right) {
+                return Integer.compare(left.getIndex(), right.getIndex());
+            }
+        });
+        return all;
     }
-
-    public static final String IPv4_SETTING = "java.net.preferIPv4Stack";
-    public static final String IPv6_SETTING = "java.net.preferIPv6Addresses";
-
-    public static final String NON_LOOPBACK_ADDRESS = "non_loopback_address";
-
-    private final static InetAddress localAddress;
-
-    static {
-        InetAddress localAddressX;
-        try {
-            localAddressX = InetAddress.getLocalHost();
-        } catch (Throwable e) {
-            logger.warn("failed to resolve local host, fallback to loopback", e);
-            localAddressX = InetAddress.getLoopbackAddress();
+    
+    /** Helper for getInterfaces, recursively adds subinterfaces to {@code target} */
+    private static void addAllInterfaces(List<NetworkInterface> target, List<NetworkInterface> level) {
+        if (!level.isEmpty()) {
+            target.addAll(level);
+            for (NetworkInterface intf : level) {
+                addAllInterfaces(target, Collections.list(intf.getSubInterfaces()));
+            }
         }
-        localAddress = localAddressX;
     }
-
+    
+    /** Returns system default for SO_REUSEADDR */
     public static boolean defaultReuseAddress() {
         return Constants.WINDOWS ? false : true;
     }
-
-    public static boolean isIPv4() {
-        return System.getProperty("java.net.preferIPv4Stack") != null && System.getProperty("java.net.preferIPv4Stack").equals("true");
-    }
-
-    public static InetAddress getIPv4Localhost() throws UnknownHostException {
-        return getLocalhost(StackType.IPv4);
-    }
-
-    public static InetAddress getIPv6Localhost() throws UnknownHostException {
-        return getLocalhost(StackType.IPv6);
-    }
-
-    public static InetAddress getLocalAddress() {
-        return localAddress;
-    }
-
-    public static String getLocalHostName(String defaultHostName) {
-        if (localAddress == null) {
-            return defaultHostName;
-        }
-        String hostName = localAddress.getHostName();
-        if (hostName == null) {
-            return defaultHostName;
-        }
-        return hostName;
-    }
-
-    public static String getLocalHostAddress(String defaultHostAddress) {
-        if (localAddress == null) {
-            return defaultHostAddress;
-        }
-        String hostAddress = localAddress.getHostAddress();
-        if (hostAddress == null) {
-            return defaultHostAddress;
-        }
-        return hostAddress;
-    }
-
-    public static InetAddress getLocalhost(StackType ip_version) throws UnknownHostException {
-        if (ip_version == StackType.IPv4)
-            return InetAddress.getByName("127.0.0.1");
-        else
-            return InetAddress.getByName("::1");
-    }
-
-    /**
-     * Returns the first non-loopback address on any interface on the current host.
-     *
-     * @param ip_version Constraint on IP version of address to be returned, 4 or 6
-     */
-    public static InetAddress getFirstNonLoopbackAddress(StackType ip_version) throws SocketException {
-        InetAddress address;
-        for (NetworkInterface intf : getInterfaces()) {
-            try {
-                if (!intf.isUp() || intf.isLoopback())
-                    continue;
-            } catch (Exception e) {
-                // might happen when calling on a network interface that does not exists
-                continue;
-            }
-            address = getFirstNonLoopbackAddress(intf, ip_version);
-            if (address != null) {
-                return address;
-            }
-        }
-
-        return null;
-    }
-
-    private static List<NetworkInterface> getInterfaces() throws SocketException {
-        Enumeration intfs = NetworkInterface.getNetworkInterfaces();
-
-        List<NetworkInterface> intfsList = Lists.newArrayList();
-        while (intfs.hasMoreElements()) {
-            intfsList.add((NetworkInterface) intfs.nextElement());
-        }
-
-        sortInterfaces(intfsList);
-        return intfsList;
-    }
-
-    private static void sortInterfaces(List<NetworkInterface> intfsList) {
-        // order by index, assuming first ones are more interesting
-        CollectionUtil.timSort(intfsList, new Comparator<NetworkInterface>() {
-            @Override
-            public int compare(NetworkInterface o1, NetworkInterface o2) {
-                return Integer.compare (o1.getIndex(), o2.getIndex());
-            }
-        });
-    }
-
-
-    /**
-     * Returns the first non-loopback address on the given interface on the current host.
-     *
-     * @param intf      the interface to be checked
-     * @param ipVersion Constraint on IP version of address to be returned, 4 or 6
-     */
-    public static InetAddress getFirstNonLoopbackAddress(NetworkInterface intf, StackType ipVersion) throws SocketException {
-        if (intf == null)
-            throw new IllegalArgumentException("Network interface pointer is null");
-
-        for (Enumeration addresses = intf.getInetAddresses(); addresses.hasMoreElements(); ) {
-            InetAddress address = (InetAddress) addresses.nextElement();
-            if (!address.isLoopbackAddress()) {
-                if ((address instanceof Inet4Address && ipVersion == StackType.IPv4) ||
-                        (address instanceof Inet6Address && ipVersion == StackType.IPv6))
-                    return address;
-            }
-        }
-        return null;
-    }
-
-    /**
-     * Returns the first address with the proper ipVersion on the given interface on the current host.
-     *
-     * @param intf      the interface to be checked
-     * @param ipVersion Constraint on IP version of address to be returned, 4 or 6
-     */
-    public static InetAddress getFirstAddress(NetworkInterface intf, StackType ipVersion) throws SocketException {
-        if (intf == null)
-            throw new IllegalArgumentException("Network interface pointer is null");
-
-        for (Enumeration addresses = intf.getInetAddresses(); addresses.hasMoreElements(); ) {
-            InetAddress address = (InetAddress) addresses.nextElement();
-            if ((address instanceof Inet4Address && ipVersion == StackType.IPv4) ||
-                    (address instanceof Inet6Address && ipVersion == StackType.IPv6))
-                return address;
-        }
-        return null;
-    }
-
-    /**
-     * A function to check if an interface supports an IP version (i.e has addresses
-     * defined for that IP version).
-     *
-     * @param intf
-     * @return
-     */
-    public static boolean interfaceHasIPAddresses(NetworkInterface intf, StackType ipVersion) throws SocketException, UnknownHostException {
-        boolean supportsVersion = false;
-        if (intf != null) {
-            // get all the InetAddresses defined on the interface
-            Enumeration addresses = intf.getInetAddresses();
-            while (addresses != null && addresses.hasMoreElements()) {
-                // get the next InetAddress for the current interface
-                InetAddress address = (InetAddress) addresses.nextElement();
-
-                // check if we find an address of correct version
-                if ((address instanceof Inet4Address && (ipVersion == StackType.IPv4)) ||
-                        (address instanceof Inet6Address && (ipVersion == StackType.IPv6))) {
-                    supportsVersion = true;
-                    break;
-                }
-            }
-        } else {
-            throw new UnknownHostException("network interface not found");
-        }
-        return supportsVersion;
-    }
-
-    /**
-     * Tries to determine the type of IP stack from the available interfaces and their addresses and from the
-     * system properties (java.net.preferIPv4Stack and java.net.preferIPv6Addresses)
-     *
-     * @return StackType.IPv4 for an IPv4 only stack, StackType.IPv6 for an IPv6 only stack, and StackType.Unknown
-     * if the type cannot be detected
-     */
-    public static StackType getIpStackType() {
-        boolean isIPv4StackAvailable = isStackAvailable(true);
-        boolean isIPv6StackAvailable = isStackAvailable(false);
-
-        // if only IPv4 stack available
-        if (isIPv4StackAvailable && !isIPv6StackAvailable) {
-            return StackType.IPv4;
-        }
-        // if only IPv6 stack available
-        else if (isIPv6StackAvailable && !isIPv4StackAvailable) {
-            return StackType.IPv6;
-        }
-        // if dual stack
-        else if (isIPv4StackAvailable && isIPv6StackAvailable) {
-            // get the System property which records user preference for a stack on a dual stack machine
-            if (Boolean.getBoolean(IPv4_SETTING)) // has preference over java.net.preferIPv6Addresses
-                return StackType.IPv4;
-            if (Boolean.getBoolean(IPv6_SETTING))
-                return StackType.IPv6;
-            return StackType.IPv6;
-        }
-        return StackType.Unknown;
-    }
-
-
-    public static boolean isStackAvailable(boolean ipv4) {
-        Collection<InetAddress> allAddrs = getAllAvailableAddresses();
-        for (InetAddress addr : allAddrs)
-            if (ipv4 && addr instanceof Inet4Address || (!ipv4 && addr instanceof Inet6Address))
-                return true;
-        return false;
-    }
-
-
-    /**
-     * Returns all the available interfaces, including first level sub interfaces.
-     */
-    public static List<NetworkInterface> getAllAvailableInterfaces() throws SocketException {
-        List<NetworkInterface> allInterfaces = new ArrayList<>();
-        for (Enumeration<NetworkInterface> interfaces = NetworkInterface.getNetworkInterfaces(); interfaces.hasMoreElements(); ) {
-            NetworkInterface intf = interfaces.nextElement();
-            allInterfaces.add(intf);
-
-            Enumeration<NetworkInterface> subInterfaces = intf.getSubInterfaces();
-            if (subInterfaces != null && subInterfaces.hasMoreElements()) {
-                while (subInterfaces.hasMoreElements()) {
-                    allInterfaces.add(subInterfaces.nextElement());
-                }
-            }
-        }
-        sortInterfaces(allInterfaces);
-        return allInterfaces;
-    }
-
-    public static Collection<InetAddress> getAllAvailableAddresses() {
-        // we want consistent order here.
-        final Set<InetAddress> retval = new TreeSet<>(new Comparator<InetAddress>() {
-            BytesRef left = new BytesRef();
-            BytesRef right = new BytesRef();
-            @Override
-            public int compare(InetAddress o1, InetAddress o2) {
-                return set(left, o1).compareTo(set(right, o2));
-            }
-
-            private BytesRef set(BytesRef ref, InetAddress addr) {
-                ref.bytes = addr.getAddress();
-                ref.offset = 0;
-                ref.length = ref.bytes.length;
-                return ref;
-            }
-        });
+    
+    /** Returns localhost, or if its misconfigured, falls back to loopback. Use with caution!!!! */
+    // TODO: can we remove this?
+    public static InetAddress getLocalHost() {
         try {
-            for (NetworkInterface intf : getInterfaces()) {
-                Enumeration<InetAddress> addrs = intf.getInetAddresses();
-                while (addrs.hasMoreElements())
-                    retval.add(addrs.nextElement());
-            }
-        } catch (SocketException e) {
-            logger.warn("Failed to derive all available interfaces", e);
+            return InetAddress.getLocalHost();
+        } catch (UnknownHostException e) {
+            logger.warn("failed to resolve local host, fallback to loopback", e);
+            return InetAddress.getLoopbackAddress();
         }
-
-        return retval;
     }
-
-
-    private NetworkUtils() {
-
+    
+    /** Returns addresses for all loopback interfaces that are up. */
+    public static InetAddress[] getLoopbackAddresses() throws SocketException {
+        List<InetAddress> list = new ArrayList<>();
+        for (NetworkInterface intf : getInterfaces()) {
+            if (intf.isLoopback() && intf.isUp()) {
+                list.addAll(Collections.list(intf.getInetAddresses()));
+            }
+        }
+        if (list.isEmpty()) {
+            throw new IllegalArgumentException("No up-and-running loopback interfaces found, got " + getInterfaces());
+        }
+        sortAddresses(list);
+        return list.toArray(new InetAddress[list.size()]);
+    }
+    
+    /** Returns addresses for the first non-loopback interface that is up. */
+    public static InetAddress[] getFirstNonLoopbackAddresses() throws SocketException {
+        List<InetAddress> list = new ArrayList<>();
+        for (NetworkInterface intf : getInterfaces()) {
+            if (intf.isLoopback() == false && intf.isUp()) {
+                list.addAll(Collections.list(intf.getInetAddresses()));
+                break;
+            }
+        }
+        if (list.isEmpty()) {
+            throw new IllegalArgumentException("No up-and-running non-loopback interfaces found, got " + getInterfaces());
+        }
+        sortAddresses(list);
+        return list.toArray(new InetAddress[list.size()]);
+    }
+    
+    /** Returns addresses for the given interface (it must be marked up) */
+    public static InetAddress[] getAddressesForInterface(String name) throws SocketException {
+        NetworkInterface intf = NetworkInterface.getByName(name);
+        if (intf == null) {
+            throw new IllegalArgumentException("No interface named '" + name + "' found, got " + getInterfaces());
+        }
+        if (!intf.isUp()) {
+            throw new IllegalArgumentException("Interface '" + name + "' is not up and running");
+        }
+        List<InetAddress> list = Collections.list(intf.getInetAddresses());
+        if (list.isEmpty()) {
+            throw new IllegalArgumentException("Interface '" + name + "' has no internet addresses");
+        }
+        sortAddresses(list);
+        return list.toArray(new InetAddress[list.size()]);
+    }
+    
+    /** Returns addresses for the given host, sorted by order of preference */
+    public static InetAddress[] getAllByName(String host) throws UnknownHostException {
+        InetAddress addresses[] = InetAddress.getAllByName(host);
+        sortAddresses(Arrays.asList(addresses));
+        return addresses;
+    }
+    
+    /** Returns only the IPV4 addresses in {@code addresses} */
+    public static InetAddress[] filterIPV4(InetAddress addresses[]) {
+        List<InetAddress> list = new ArrayList<>();
+        for (InetAddress address : addresses) {
+            if (address instanceof Inet4Address) {
+                list.add(address);
+            }
+        }
+        if (list.isEmpty()) {
+            throw new IllegalArgumentException("No ipv4 addresses found in " + Arrays.toString(addresses));
+        }
+        return list.toArray(new InetAddress[list.size()]);
+    }
+    
+    /** Returns only the IPV6 addresses in {@code addresses} */
+    public static InetAddress[] filterIPV6(InetAddress addresses[]) {
+        List<InetAddress> list = new ArrayList<>();
+        for (InetAddress address : addresses) {
+            if (address instanceof Inet6Address) {
+                list.add(address);
+            }
+        }
+        if (list.isEmpty()) {
+            throw new IllegalArgumentException("No ipv6 addresses found in " + Arrays.toString(addresses));
+        }
+        return list.toArray(new InetAddress[list.size()]);
     }
 }
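
The publish ordering described in the commit message falls directly out of sortKey: a lower key is preferred, so ordinary addresses beat site-local, then link-local, then loopback, with ipv4 before ipv6 while java.net.preferIPv4Stack defaults to true. A small sketch in the same package (sortKey is package-private; the addresses are illustrative, mirroring the unit tests below):

    package org.elasticsearch.common.network;

    import java.net.InetAddress;

    public class SortKeySketch {
        public static void main(String[] args) throws Exception {
            InetAddress ordinary  = InetAddress.getByName("192.192.192.192"); // global
            InetAddress siteLocal = InetAddress.getByName("172.16.0.1");      // site-local
            InetAddress loopback  = InetAddress.getByName("127.0.0.1");       // loopback
            // with prefer_v4 = true: 4 < 5 < 7, so the global address wins
            System.out.println(NetworkUtils.sortKey(ordinary, true));   // smallest key
            System.out.println(NetworkUtils.sortKey(siteLocal, true));
            System.out.println(NetworkUtils.sortKey(loopback, true));   // largest key
        }
    }
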
diff --git a/core/src/main/java/org/elasticsearch/discovery/zen/ping/multicast/MulticastZenPing.java b/core/src/main/java/org/elasticsearch/discovery/zen/ping/multicast/MulticastZenPing.java
index 97f872c3108..26e3a6ded76 100644
--- a/core/src/main/java/org/elasticsearch/discovery/zen/ping/multicast/MulticastZenPing.java
+++ b/core/src/main/java/org/elasticsearch/discovery/zen/ping/multicast/MulticastZenPing.java
@@ -131,7 +131,9 @@ public class MulticastZenPing extends AbstractLifecycleComponent<ZenPing> implements ZenPing {
             boolean deferToInterface = settings.getAsBoolean("discovery.zen.ping.multicast.defer_group_to_set_interface", Constants.MAC_OS_X);
             multicastChannel = MulticastChannel.getChannel(nodeName(), shared,
                     new MulticastChannel.Config(port, group, bufferSize, ttl,
-                            networkService.resolvePublishHostAddress(address),
+                            // don't use publish address, the use case for that is e.g. a firewall or proxy and
+                            // may not even be bound to an interface on this machine! use the first bound address.
+                            networkService.resolveBindHostAddress(address)[0],
                             deferToInterface),
                     new Receiver());
         } catch (Throwable t) {
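
The new comment is the crux of this hunk: a publish host may point at a firewall or proxy address that no local interface owns, and a multicast channel cannot bind to such an address. A minimal sketch of the constraint (54328 is the multicast discovery default port; treat the whole class as illustrative):

    import java.net.InetAddress;
    import java.net.InetSocketAddress;
    import java.net.MulticastSocket;

    public class MulticastBindSketch {
        public static void main(String[] args) throws Exception {
            // Binding works for an address that exists on a local interface...
            InetAddress local = InetAddress.getLoopbackAddress();
            try (MulticastSocket socket = new MulticastSocket(new InetSocketAddress(local, 54328))) {
                System.out.println("bound to " + socket.getLocalSocketAddress());
            }
            // ...while a remote publish address would throw a BindException here.
        }
    }
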
diff --git a/core/src/main/java/org/elasticsearch/http/netty/NettyHttpServerTransport.java b/core/src/main/java/org/elasticsearch/http/netty/NettyHttpServerTransport.java
index a89c0209cd3..664f7a8d0e4 100644
--- a/core/src/main/java/org/elasticsearch/http/netty/NettyHttpServerTransport.java
+++ b/core/src/main/java/org/elasticsearch/http/netty/NettyHttpServerTransport.java
@@ -51,6 +51,10 @@ import org.jboss.netty.handler.timeout.ReadTimeoutException;
 import java.io.IOException;
 import java.net.InetAddress;
 import java.net.InetSocketAddress;
+import java.net.SocketAddress;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
 import java.util.concurrent.Executors;
 import java.util.concurrent.atomic.AtomicReference;
 
@@ -128,7 +132,7 @@ public class NettyHttpServerTransport extends AbstractLifecycleComponent<HttpServerTransport> implements HttpServerTransport {
 
-    protected volatile Channel serverChannel;
+    protected volatile List<Channel> serverChannels = new ArrayList<>();
 
     protected OpenChannelsHandler serverOpenChannels;
 
@@ -243,33 +247,18 @@ public class NettyHttpServerTransport extends AbstractLifecycleComponent<HttpServerTransport> implements HttpServerTransport {
-        final AtomicReference<Exception> lastException = new AtomicReference<>();
-        boolean success = portsRange.iterate(new PortsRange.PortCallback() {
-            @Override
-            public boolean onPortNumber(int portNumber) {
-                try {
-                    serverChannel = serverBootstrap.bind(new InetSocketAddress(hostAddress, portNumber));
-                } catch (Exception e) {
-                    lastException.set(e);
-                    return false;
-                }
-                return true;
-            }
-        });
-        if (!success) {
-            throw new BindHttpException("Failed to bind to [" + port + "]", lastException.get());
+        
+        for (InetAddress address : hostAddresses) {
+            bindAddress(address);
         }
 
-        InetSocketAddress boundAddress = (InetSocketAddress) serverChannel.getLocalAddress();
+        InetSocketAddress boundAddress = (InetSocketAddress) serverChannels.get(0).getLocalAddress();
         InetSocketAddress publishAddress;
         if (0 == publishPort) {
             publishPort = boundAddress.getPort();
@@ -281,12 +270,42 @@ public class NettyHttpServerTransport extends AbstractLifecycleComponent<HttpServerTransport> implements HttpServerTransport {
+    private void bindAddress(final InetAddress hostAddress) {
+        PortsRange portsRange = new PortsRange(port);
+        final AtomicReference<Exception> lastException = new AtomicReference<>();
+        final AtomicReference<SocketAddress> boundSocket = new AtomicReference<>();
+        boolean success = portsRange.iterate(new PortsRange.PortCallback() {
+            @Override
+            public boolean onPortNumber(int portNumber) {
+                try {
+                    synchronized (serverChannels) {
+                        Channel channel = serverBootstrap.bind(new InetSocketAddress(hostAddress, portNumber));
+                        serverChannels.add(channel);
+                        boundSocket.set(channel.getLocalAddress());
+                    }
+                } catch (Exception e) {
+                    lastException.set(e);
+                    return false;
+                }
+                return true;
+            }
+        });
+        if (!success) {
+            throw new BindHttpException("Failed to bind to [" + port + "]", lastException.get());
+        }
+        logger.info("Bound http to address [{}]", boundSocket.get());
+    }
 
     @Override
     protected void doStop() {
-        if (serverChannel != null) {
-            serverChannel.close().awaitUninterruptibly();
-            serverChannel = null;
+        synchronized (serverChannels) {
+            if (serverChannels != null) {
+                for (Channel channel : serverChannels) {
+                    channel.close().awaitUninterruptibly();
+                }
+                serverChannels = null;
+            }
         }
 
         if (serverOpenChannels != null) {
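
Stripped of Netty, the doStart/bindAddress split above amounts to: one server channel per resolved address, each address scanning the same port range independently, failing only if an address cannot bind anywhere in the range. A sketch with plain server sockets standing in for the server bootstrap (all names are illustrative):

    import java.io.IOException;
    import java.net.InetAddress;
    import java.net.InetSocketAddress;
    import java.net.ServerSocket;
    import java.util.ArrayList;
    import java.util.List;

    public class MultiBindSketch {
        static List<ServerSocket> bindAll(InetAddress[] addresses, int[] ports) throws IOException {
            List<ServerSocket> channels = new ArrayList<>();
            for (InetAddress address : addresses) {
                IOException last = null;
                boolean bound = false;
                for (int port : ports) {
                    try {
                        ServerSocket socket = new ServerSocket();
                        socket.bind(new InetSocketAddress(address, port));
                        channels.add(socket);
                        bound = true;
                        break; // first free port in the range wins for this address
                    } catch (IOException e) {
                        last = e; // remember the failure, try the next port
                    }
                }
                if (!bound) {
                    throw new IOException("Failed to bind " + address, last);
                }
            }
            return channels;
        }
    }
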
diff --git a/core/src/main/java/org/elasticsearch/transport/netty/NettyTransport.java b/core/src/main/java/org/elasticsearch/transport/netty/NettyTransport.java
index 0b85bb63211..8a43e0581bd 100644
--- a/core/src/main/java/org/elasticsearch/transport/netty/NettyTransport.java
+++ b/core/src/main/java/org/elasticsearch/transport/netty/NettyTransport.java
@@ -146,8 +146,8 @@ public class NettyTransport extends AbstractLifecycleComponent<Transport> implements Transport {
     // node id to actual channel
    protected final ConcurrentMap<DiscoveryNode, NodeChannels> connectedNodes = newConcurrentMap();
    protected final Map<String, ServerBootstrap> serverBootstraps = newConcurrentMap();
-    protected final Map<String, Channel> serverChannels = newConcurrentMap();
-    protected final Map<String, BoundTransportAddress> profileBoundAddresses = newConcurrentMap();
+    protected final Map<String, List<Channel>> serverChannels = newConcurrentMap();
+    protected final ConcurrentMap<String, BoundTransportAddress> profileBoundAddresses = newConcurrentMap();
     protected volatile TransportServiceAdapter transportServiceAdapter;
     protected volatile BoundTransportAddress boundAddress;
    protected final KeyedLock<String> connectionLock = new KeyedLock<>();
@@ -286,7 +286,7 @@ public class NettyTransport extends AbstractLifecycleComponent<Transport> implements Transport {
                     bindServerBootstrap(name, mergedSettings);
                 }
 
-                InetSocketAddress boundAddress = (InetSocketAddress) serverChannels.get(DEFAULT_PROFILE).getLocalAddress();
+                InetSocketAddress boundAddress = (InetSocketAddress) serverChannels.get(DEFAULT_PROFILE).get(0).getLocalAddress();
                 int publishPort = settings.getAsInt("transport.netty.publish_port", settings.getAsInt("transport.publish_port", boundAddress.getPort()));
                 String publishHost = settings.get("transport.netty.publish_host", settings.get("transport.publish_host", settings.get("transport.host")));
                 InetSocketAddress publishAddress = createPublishAddress(publishHost, publishPort);
@@ -397,23 +397,38 @@ public class NettyTransport extends AbstractLifecycleComponent<Transport> implements Transport {
 
     private void bindServerBootstrap(final String name, final Settings settings) {
         // Bind and start to accept incoming connections.
-        InetAddress hostAddressX;
+        InetAddress hostAddresses[];
         String bindHost = settings.get("bind_host");
         try {
-            hostAddressX = networkService.resolveBindHostAddress(bindHost);
+            hostAddresses = networkService.resolveBindHostAddress(bindHost);
         } catch (IOException e) {
             throw new BindTransportException("Failed to resolve host [" + bindHost + "]", e);
         }
-        final InetAddress hostAddress = hostAddressX;
+        for (InetAddress hostAddress : hostAddresses) {
+            bindServerBootstrap(name, hostAddress, settings);
+        }
+    }
+        
+    private void bindServerBootstrap(final String name, final InetAddress hostAddress, Settings settings) {
 
         String port = settings.get("port");
         PortsRange portsRange = new PortsRange(port);
        final AtomicReference<Exception> lastException = new AtomicReference<>();
+        final AtomicReference<SocketAddress> boundSocket = new AtomicReference<>();
         boolean success = portsRange.iterate(new PortsRange.PortCallback() {
             @Override
             public boolean onPortNumber(int portNumber) {
                 try {
-                    serverChannels.put(name, serverBootstraps.get(name).bind(new InetSocketAddress(hostAddress, portNumber)));
+                    Channel channel = serverBootstraps.get(name).bind(new InetSocketAddress(hostAddress, portNumber));
+                    synchronized (serverChannels) {
+                        List<Channel> list = serverChannels.get(name);
+                        if (list == null) {
+                            list = new ArrayList<>();
+                            serverChannels.put(name, list);
+                        }
+                        list.add(channel);
+                        boundSocket.set(channel.getLocalAddress());
+                    }
                 } catch (Exception e) {
                     lastException.set(e);
                     return false;
@@ -426,14 +441,15 @@ public class NettyTransport extends AbstractLifecycleComponent<Transport> implements Transport {
         }
 
         if (!DEFAULT_PROFILE.equals(name)) {
-            InetSocketAddress boundAddress = (InetSocketAddress) serverChannels.get(name).getLocalAddress();
+            InetSocketAddress boundAddress = (InetSocketAddress) boundSocket.get();
             int publishPort = settings.getAsInt("publish_port", boundAddress.getPort());
             String publishHost = settings.get("publish_host", boundAddress.getHostString());
             InetSocketAddress publishAddress = createPublishAddress(publishHost, publishPort);
-            profileBoundAddresses.put(name, new BoundTransportAddress(new InetSocketTransportAddress(boundAddress), new InetSocketTransportAddress(publishAddress)));
+            // TODO: support real multihoming with publishing. Today we use putIfAbsent so only the prioritized address is published
+            profileBoundAddresses.putIfAbsent(name, new BoundTransportAddress(new InetSocketTransportAddress(boundAddress), new InetSocketTransportAddress(publishAddress)));
         }
 
-        logger.debug("Bound profile [{}] to address [{}]", name, serverChannels.get(name).getLocalAddress());
+        logger.info("Bound profile [{}] to address [{}]", name, boundSocket.get());
     }
 
     private void createServerBootstrap(String name, Settings settings) {
@@ -500,15 +516,17 @@ public class NettyTransport extends AbstractLifecycleComponent<Transport> implements Transport {
                         nodeChannels.close();
                     }
 
-                    Iterator<Map.Entry<String, Channel>> serverChannelIterator = serverChannels.entrySet().iterator();
+                    Iterator<Map.Entry<String, List<Channel>>> serverChannelIterator = serverChannels.entrySet().iterator();
                     while (serverChannelIterator.hasNext()) {
-                        Map.Entry<String, Channel> serverChannelEntry = serverChannelIterator.next();
+                        Map.Entry<String, List<Channel>> serverChannelEntry = serverChannelIterator.next();
                         String name = serverChannelEntry.getKey();
-                        Channel serverChannel = serverChannelEntry.getValue();
-                        try {
-                            serverChannel.close().awaitUninterruptibly();
-                        } catch (Throwable t) {
-                            logger.debug("Error closing serverChannel for profile [{}]", t, name);
+                        List<Channel> serverChannels = serverChannelEntry.getValue();
+                        for (Channel serverChannel : serverChannels) {
+                            try {
+                                serverChannel.close().awaitUninterruptibly();
+                            } catch (Throwable t) {
+                                logger.debug("Error closing serverChannel for profile [{}]", t, name);
+                            }
                         }
                         serverChannelIterator.remove();
                     }
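
The shutdown side mirrors the data-structure change: every profile now owns a list of channels, and each one must be closed before the profile entry is dropped. The same shape in isolation, with java.io.Closeable standing in for Netty's Channel (class and method names are illustrative):

    import java.io.Closeable;
    import java.io.IOException;
    import java.util.Iterator;
    import java.util.List;
    import java.util.Map;

    public class CloseChannelsSketch {
        static void closeAll(Map<String, List<Closeable>> serverChannels) {
            Iterator<Map.Entry<String, List<Closeable>>> it = serverChannels.entrySet().iterator();
            while (it.hasNext()) {
                Map.Entry<String, List<Closeable>> entry = it.next();
                for (Closeable channel : entry.getValue()) {
                    try {
                        channel.close();
                    } catch (IOException e) {
                        // swallow and keep closing the rest, mirroring the transport code
                    }
                }
                it.remove(); // profile entry goes away only after all its channels are closed
            }
        }
    }
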
diff --git a/core/src/test/java/org/elasticsearch/common/network/NetworkUtilsTests.java b/core/src/test/java/org/elasticsearch/common/network/NetworkUtilsTests.java
new file mode 100644
index 00000000000..fdcaef3e193
--- /dev/null
+++ b/core/src/test/java/org/elasticsearch/common/network/NetworkUtilsTests.java
@@ -0,0 +1,77 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.common.network;
+
+import org.elasticsearch.test.ESTestCase;
+
+import java.net.InetAddress;
+
+/**
+ * Tests for network utils. Please avoid using any methods that cause DNS lookups!
+ */
+public class NetworkUtilsTests extends ESTestCase {
+    
+    /**
+     * test sort key order respects PREFER_IPV4
+     */
+    public void testSortKey() throws Exception {
+        InetAddress localhostv4 = InetAddress.getByName("127.0.0.1");
+        InetAddress localhostv6 = InetAddress.getByName("::1");
+        assertTrue(NetworkUtils.sortKey(localhostv4, true) < NetworkUtils.sortKey(localhostv6, true));
+        assertTrue(NetworkUtils.sortKey(localhostv6, false) < NetworkUtils.sortKey(localhostv4, false));
+    }
+    
+    /**
+     * test ordinary addresses sort before private addresses
+     */
+    public void testSortKeySiteLocal() throws Exception {
+        InetAddress siteLocal = InetAddress.getByName("172.16.0.1");
+        assert siteLocal.isSiteLocalAddress();
+        InetAddress ordinary = InetAddress.getByName("192.192.192.192");
+        assertTrue(NetworkUtils.sortKey(ordinary, true) < NetworkUtils.sortKey(siteLocal, true));
+        assertTrue(NetworkUtils.sortKey(ordinary, false) < NetworkUtils.sortKey(siteLocal, false));
+        
+        InetAddress siteLocal6 = InetAddress.getByName("fec0::1");
+        assert siteLocal6.isSiteLocalAddress();
+        InetAddress ordinary6 = InetAddress.getByName("fddd::1");
+        assertTrue(NetworkUtils.sortKey(ordinary6, true) < NetworkUtils.sortKey(siteLocal6, true));
+        assertTrue(NetworkUtils.sortKey(ordinary6, false) < NetworkUtils.sortKey(siteLocal6, false));
+    }
+    
+    /**
+     * test private addresses sort before link local addresses
+     */
+    public void testSortKeyLinkLocal() throws Exception {
+        InetAddress linkLocal = InetAddress.getByName("fe80::1");
+        assert linkLocal.isLinkLocalAddress();
+        InetAddress ordinary = InetAddress.getByName("fddd::1");
+        assertTrue(NetworkUtils.sortKey(ordinary, true) < NetworkUtils.sortKey(linkLocal, true));
+        assertTrue(NetworkUtils.sortKey(ordinary, false) < NetworkUtils.sortKey(linkLocal, false));
+    }
+    
+    /**
+     * Test filtering out ipv4/ipv6 addresses
+     */
+    public void testFilter() throws Exception {
+        InetAddress addresses[] = { InetAddress.getByName("::1"), InetAddress.getByName("127.0.0.1") };
+        assertArrayEquals(new InetAddress[] { InetAddress.getByName("127.0.0.1") }, NetworkUtils.filterIPV4(addresses));
+        assertArrayEquals(new InetAddress[] { InetAddress.getByName("::1") }, NetworkUtils.filterIPV6(addresses));
+    }
+}
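
Taken together, these assertions pin down the sortKey contract: smaller keys sort first, the boolean selects which protocol family wins, and ordinary addresses beat site-local, which beat link-local. One key function that satisfies exactly the assertions above (the real logic lives in NetworkUtils.java, which this patch rewrites but is not shown here, so treat this as an illustrative reconstruction):

    import java.net.InetAddress;

    // illustrative ordering key; lower sorts first
    static int sortKey(InetAddress address, boolean preferIPV4) {
        int key = address.getAddress().length; // 4 for IPv4, 16 for IPv6
        if (preferIPV4 == false) {
            key = -key; // flip the family preference
        }
        // push special-scope addresses toward the end of the order
        if (address.isAnyLocalAddress()) {
            key += 5;
        }
        if (address.isLoopbackAddress()) {
            key += 4;
        }
        if (address.isLinkLocalAddress()) {
            key += 3;
        }
        if (address.isSiteLocalAddress()) {
            key += 2;
        }
        return key;
    }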
diff --git a/core/src/test/java/org/elasticsearch/test/InternalTestCluster.java b/core/src/test/java/org/elasticsearch/test/InternalTestCluster.java
index 8ec6f89e44b..9eaab6d8b4b 100644
--- a/core/src/test/java/org/elasticsearch/test/InternalTestCluster.java
+++ b/core/src/test/java/org/elasticsearch/test/InternalTestCluster.java
@@ -504,7 +504,7 @@ public final class InternalTestCluster extends TestCluster {
     public static String clusterName(String prefix, long clusterSeed) {
         StringBuilder builder = new StringBuilder(prefix);
         final int childVM = RandomizedTest.systemPropertyAsInt(SysGlobals.CHILDVM_SYSPROP_JVM_ID, 0);
-        builder.append('-').append(NetworkUtils.getLocalHostName("__default_host__"));
+        builder.append('-').append(NetworkUtils.getLocalHost().getHostName());
         builder.append("-CHILD_VM=[").append(childVM).append(']');
         builder.append("-CLUSTER_SEED=[").append(clusterSeed).append(']');
         // if multiple maven task run on a single host we better have an identifier that doesn't rely on input params
diff --git a/core/src/test/java/org/elasticsearch/transport/netty/NettyTransportMultiPortTests.java b/core/src/test/java/org/elasticsearch/transport/netty/NettyTransportMultiPortTests.java
index 1c4cac7078e..1a494de4931 100644
--- a/core/src/test/java/org/elasticsearch/transport/netty/NettyTransportMultiPortTests.java
+++ b/core/src/test/java/org/elasticsearch/transport/netty/NettyTransportMultiPortTests.java
@@ -135,29 +135,6 @@ public class NettyTransportMultiPortTests extends ESTestCase {
         }
     }
 
-    @Test
-    public void testThatBindingOnDifferentHostsWorks() throws Exception {
-        int[] ports = getRandomPorts(2);
-        InetAddress firstNonLoopbackAddress = NetworkUtils.getFirstNonLoopbackAddress(NetworkUtils.StackType.IPv4);
-        assumeTrue("No IP-v4 non-loopback address available - are you on a plane?", firstNonLoopbackAddress != null);
-        Settings settings = settingsBuilder()
-                .put("network.host", "127.0.0.1")
-                .put("transport.tcp.port", ports[0])
-                .put("transport.profiles.default.bind_host", "127.0.0.1")
-                .put("transport.profiles.client1.bind_host", firstNonLoopbackAddress.getHostAddress())
-                .put("transport.profiles.client1.port", ports[1])
-                .build();
-
-        ThreadPool threadPool = new ThreadPool("tst");
-        try (NettyTransport ignored = startNettyTransport(settings, threadPool)) {
-            assertPortIsBound("127.0.0.1", ports[0]);
-            assertPortIsBound(firstNonLoopbackAddress.getHostAddress(), ports[1]);
-            assertConnectionRefused(ports[1]);
-        } finally {
-            terminate(threadPool);
-        }
-    }
-
     @Test
     public void testThatProfileWithoutValidNameIsIgnored() throws Exception {
         int[] ports = getRandomPorts(3);
diff --git a/dev-tools/src/main/resources/ant/integration-tests.xml b/dev-tools/src/main/resources/ant/integration-tests.xml
index 3da39286400..d710e57a076 100644
--- a/dev-tools/src/main/resources/ant/integration-tests.xml
+++ b/dev-tools/src/main/resources/ant/integration-tests.xml
@@ -124,7 +124,7 @@
       
-        
+        
       
     
   
@@ -138,7 +138,7 @@
       
-        
+        
       
     
   
diff --git a/distribution/pom.xml b/distribution/pom.xml
index ca5b6e18fcb..cdfd311b9c6 100644
--- a/distribution/pom.xml
+++ b/distribution/pom.xml
@@ -153,7 +153,7 @@
                                 1
                                 
                                     
-                                    127.0.0.1:${integ.transport.port}
+                                    localhost:${integ.transport.port}
                                 
                             
                         
diff --git a/docs/reference/modules/discovery/zen.asciidoc b/docs/reference/modules/discovery/zen.asciidoc
index 7cca0175f3c..8f0bd1f1c50 100644
--- a/docs/reference/modules/discovery/zen.asciidoc
+++ b/docs/reference/modules/discovery/zen.asciidoc
@@ -38,7 +38,7 @@ respond to. It provides the following settings with the
 |`ttl` |The ttl of the multicast message. Defaults to `3`.
 
 |`address` |The address to bind to, defaults to `null` which means it
-will bind to all available network interfaces.
+will bind to `network.bind_host`.
 
 |`enabled` |Whether multicast ping discovery is enabled. Defaults to `true`.
 |=======================================================================
diff --git a/docs/reference/modules/network.asciidoc b/docs/reference/modules/network.asciidoc
index 955b3f129ca..e855d826a32 100644
--- a/docs/reference/modules/network.asciidoc
+++ b/docs/reference/modules/network.asciidoc
@@ -9,13 +9,15 @@ network settings allows to set common settings that will be shared among
 all network based modules (unless explicitly overridden in each module).
 
 The `network.bind_host` setting allows to control the host different network
-components will bind on. By default, the bind host will be `anyLoopbackAddress`
-(typically `127.0.0.1` or `::1`).
+components will bind on. By default, the bind host will be `_local_`
+(loopback addresses such as `127.0.0.1`, `::1`).
 
 The `network.publish_host` setting allows to control the host the node will
 publish itself within the cluster so other nodes will be able to connect to it.
-Of course, this can't be the `anyLocalAddress`, and by default, it will be the
-first loopback address (if possible), or the local address.
+Currently an Elasticsearch node may be bound to multiple addresses, but only
+publishes one. If not specified, this defaults to the "best" address from
+`network.bind_host`. By default, IPv4 addresses are preferred to IPv6, and
+ordinary addresses are preferred to site-local or link-local addresses.
 
 The `network.host` setting is a simple setting to automatically set both
 `network.bind_host` and `network.publish_host` to the same host value.
@@ -27,21 +29,25 @@ in the following table:
 [cols="<,<",options="header",]
 |=======================================================================
 |Logical Host Setting Value |Description
-|`_local_` |Will be resolved to the local ip address.
+|`_local_` |Will be resolved to loopback addresses.
 
-|`_non_loopback_` |The first non loopback address.
+|`_local:ipv4_` |Will be resolved to loopback IPv4 addresses.
 
-|`_non_loopback:ipv4_` |The first non loopback IPv4 address.
+|`_local:ipv6_` |Will be resolved to loopback IPv6 addresses.
 
-|`_non_loopback:ipv6_` |The first non loopback IPv6 address.
+|`_non_loopback_` |Addresses of the first non loopback interface.
 
-|`_[networkInterface]_` |Resolves to the ip address of the provided
+|`_non_loopback:ipv4_` |IPv4 addresses of the first non loopback interface.
+
+|`_non_loopback:ipv6_` |IPv6 addresses of the first non loopback interface.
+
+|`_[networkInterface]_` |Resolves to the addresses of the provided
+
+|`_[networkInterface]_` |Resolves to the addresses of the provided
 network interface. For example `_en0_`.
 
-|`_[networkInterface]:ipv4_` |Resolves to the ipv4 address of the
+|`_[networkInterface]:ipv4_` |Resolves to the ipv4 addresses of the
 provided network interface. For example `_en0:ipv4_`.
 
-|`_[networkInterface]:ipv6_` |Resolves to the ipv6 address of the
+|`_[networkInterface]:ipv6_` |Resolves to the ipv6 addresses of the
 provided network interface. For example `_en0:ipv6_`.
 |=======================================================================
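
For readers wondering what an interface value such as `_en0:ipv4_` now resolves to: all IPv4 addresses configured on that interface, not just the first one. The equivalent lookup with plain JDK APIs looks roughly like this (illustrative; `en0` is an example interface name):

    import java.net.Inet4Address;
    import java.net.InetAddress;
    import java.net.NetworkInterface;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class InterfaceAddresses {
        public static void main(String[] args) throws Exception {
            NetworkInterface iface = NetworkInterface.getByName("en0");
            List<InetAddress> ipv4 = new ArrayList<>();
            for (InetAddress address : Collections.list(iface.getInetAddresses())) {
                if (address instanceof Inet4Address) {
                    ipv4.add(address); // everything here is kept, not just the first hit
                }
            }
            System.out.println(ipv4);
        }
    }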
 
diff --git a/plugins/cloud-aws/src/main/java/org/elasticsearch/cloud/aws/network/Ec2NameResolver.java b/plugins/cloud-aws/src/main/java/org/elasticsearch/cloud/aws/network/Ec2NameResolver.java
index d0c055ffd46..337a97e13a1 100755
--- a/plugins/cloud-aws/src/main/java/org/elasticsearch/cloud/aws/network/Ec2NameResolver.java
+++ b/plugins/cloud-aws/src/main/java/org/elasticsearch/cloud/aws/network/Ec2NameResolver.java
@@ -93,7 +93,7 @@ public class Ec2NameResolver extends AbstractComponent implements CustomNameReso
      * @throws IOException if ec2 meta-data cannot be obtained.
      * @see CustomNameResolver#resolveIfPossible(String)
      */
-    public InetAddress resolve(Ec2HostnameType type, boolean warnOnFailure) {
+    public InetAddress[] resolve(Ec2HostnameType type, boolean warnOnFailure) {
         URLConnection urlConnection = null;
         InputStream in = null;
         try {
@@ -109,7 +109,8 @@ public class Ec2NameResolver extends AbstractComponent implements CustomNameReso
                 logger.error("no ec2 metadata returned from {}", url);
                 return null;
             }
-            return InetAddress.getByName(metadataResult);
+            // only one address is returned, because we explicitly ask for a single one via the Ec2HostnameType
+            return new InetAddress[] { InetAddress.getByName(metadataResult) };
         } catch (IOException e) {
             if (warnOnFailure) {
                 logger.warn("failed to get metadata for [" + type.configName + "]: " + ExceptionsHelper.detailedMessage(e));
@@ -123,13 +124,13 @@ public class Ec2NameResolver extends AbstractComponent implements CustomNameReso
     }
 
     @Override
-    public InetAddress resolveDefault() {
+    public InetAddress[] resolveDefault() {
         return null; // using this, one has to explicitly specify _ec2_ in network setting
 //        return resolve(Ec2HostnameType.DEFAULT, false);
     }
 
     @Override
-    public InetAddress resolveIfPossible(String value) {
+    public InetAddress[] resolveIfPossible(String value) {
         for (Ec2HostnameType type : Ec2HostnameType.values()) {
             if (type.configName.equals(value)) {
                 return resolve(type, true);
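
The array-returning contract applies to every CustomNameResolver, not just the EC2 one. A hypothetical resolver against the two signatures visible in this diff (the `_static_` token and the fixed address are invented for illustration):

    import java.net.InetAddress;
    import java.net.UnknownHostException;

    import org.elasticsearch.common.network.NetworkService.CustomNameResolver;

    // hypothetical resolver; not part of this patch
    public class StaticNameResolver implements CustomNameResolver {

        @Override
        public InetAddress[] resolveDefault() {
            return null; // null defers to the standard resolution chain
        }

        @Override
        public InetAddress[] resolveIfPossible(String value) {
            if ("_static_".equals(value) == false) {
                return null; // not ours to resolve
            }
            try {
                return new InetAddress[] { InetAddress.getByName("10.0.0.5") };
            } catch (UnknownHostException e) {
                return null;
            }
        }
    }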
diff --git a/plugins/pom.xml b/plugins/pom.xml
index 78e3d65aa0d..d89a441e233 100644
--- a/plugins/pom.xml
+++ b/plugins/pom.xml
@@ -414,7 +414,7 @@
                                 1
                                 
                                     
-                                    127.0.0.1:${integ.transport.port}
+                                    localhost:${integ.transport.port}
                                 
                             
                         

From b2ba3847f73ace98d73ea63e82b44cf5be326add Mon Sep 17 00:00:00 2001
From: Nicholas Knize 
Date: Fri, 7 Aug 2015 15:55:22 -0500
Subject: [PATCH 02/39] Refactor geo_point validate* and normalize* options to
 ignore_malformed and coerce*

For consistency, the geo_point mapper's validate and normalize options are converted to ignore_malformed and coerce.
---
 .../index/mapper/geo/GeoPointFieldMapper.java | 173 +++++++-----------
 .../query/GeoBoundingBoxQueryBuilder.java     |  18 ++
 .../query/GeoBoundingBoxQueryParser.java      |  35 +++-
 .../index/query/GeoDistanceQueryBuilder.java  |  20 ++
 .../index/query/GeoDistanceQueryParser.java   |  35 +++-
 .../query/GeoDistanceRangeQueryBuilder.java   |  20 ++
 .../query/GeoDistanceRangeQueryParser.java    |  35 +++-
 .../index/query/GeoPolygonQueryBuilder.java   |  20 ++
 .../index/query/GeoPolygonQueryParser.java    |  34 +++-
 .../search/sort/GeoDistanceSortBuilder.java   |  18 ++
 .../search/sort/GeoDistanceSortParser.java    |  33 +++-
 .../mapper/geo/GeoPointFieldMapperTests.java  | 156 +++++++++++++++-
 .../mapper/geo/GeoPointFieldTypeTests.java    |  10 +-
 .../index/search/geo/GeoUtilsTests.java       |  26 +--
 .../functionscore/DecayFunctionScoreIT.java   |   3 +-
 .../search/geo/GeoBoundingBoxIT.java          |  16 +-
 .../search/geo/GeoDistanceIT.java             |  67 ++++---
 .../query-dsl/geo-bounding-box-query.asciidoc |  19 ++
 .../query-dsl/geo-distance-query.asciidoc     |  13 ++
 .../geo-distance-range-query.asciidoc         |   2 +-
 .../query-dsl/geo-polygon-query.asciidoc      |  14 ++
 21 files changed, 550 insertions(+), 217 deletions(-)

diff --git a/core/src/main/java/org/elasticsearch/index/mapper/geo/GeoPointFieldMapper.java b/core/src/main/java/org/elasticsearch/index/mapper/geo/GeoPointFieldMapper.java
index 0458c410b5b..8addceff7e0 100644
--- a/core/src/main/java/org/elasticsearch/index/mapper/geo/GeoPointFieldMapper.java
+++ b/core/src/main/java/org/elasticsearch/index/mapper/geo/GeoPointFieldMapper.java
@@ -85,6 +85,8 @@ public class GeoPointFieldMapper extends FieldMapper implements ArrayValueMapper
         public static final String LON_SUFFIX = "." + LON;
         public static final String GEOHASH = "geohash";
         public static final String GEOHASH_SUFFIX = "." + GEOHASH;
+        public static final String IGNORE_MALFORMED = "ignore_malformed";
+        public static final String COERCE = "coerce";
     }
 
     public static class Defaults {
@@ -93,10 +95,9 @@ public class GeoPointFieldMapper extends FieldMapper implements ArrayValueMapper
         public static final boolean ENABLE_GEOHASH = false;
         public static final boolean ENABLE_GEOHASH_PREFIX = false;
         public static final int GEO_HASH_PRECISION = GeoHashUtils.PRECISION;
-        public static final boolean NORMALIZE_LAT = true;
-        public static final boolean NORMALIZE_LON = true;
-        public static final boolean VALIDATE_LAT = true;
-        public static final boolean VALIDATE_LON = true;
+
+        public static final boolean IGNORE_MALFORMED = false;
+        public static final boolean COERCE = false;
 
         public static final MappedFieldType FIELD_TYPE = new GeoPointFieldType();
 
@@ -215,6 +216,7 @@ public class GeoPointFieldMapper extends FieldMapper implements ArrayValueMapper
         @Override
         public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext parserContext) throws MapperParsingException {
             Builder builder = geoPointField(name);
+            final boolean indexCreatedBeforeV2_0 = parserContext.indexVersionCreated().before(Version.V_2_0_0);
             parseField(builder, name, node, parserContext);
             for (Iterator<Map.Entry<String, Object>> iterator = node.entrySet().iterator(); iterator.hasNext();) {
                 Map.Entry<String, Object> entry = iterator.next();
@@ -245,25 +247,42 @@ public class GeoPointFieldMapper extends FieldMapper implements ArrayValueMapper
                         builder.geoHashPrecision(GeoUtils.geoHashLevelsForPrecision(fieldNode.toString()));
                     }
                     iterator.remove();
-                } else if (fieldName.equals("validate")) {
-                    builder.fieldType().setValidateLat(XContentMapValues.nodeBooleanValue(fieldNode));
-                    builder.fieldType().setValidateLon(XContentMapValues.nodeBooleanValue(fieldNode));
+                } else if (fieldName.equals(Names.IGNORE_MALFORMED)) {
+                    if (builder.fieldType().coerce == false) {
+                        builder.fieldType().ignoreMalformed = XContentMapValues.nodeBooleanValue(fieldNode);
+                    }
                     iterator.remove();
-                } else if (fieldName.equals("validate_lon")) {
-                    builder.fieldType().setValidateLon(XContentMapValues.nodeBooleanValue(fieldNode));
+                } else if (indexCreatedBeforeV2_0 && fieldName.equals("validate")) {
+                    if (builder.fieldType().ignoreMalformed == false) {
+                        builder.fieldType().ignoreMalformed = !XContentMapValues.nodeBooleanValue(fieldNode);
+                    }
+                    iterator.remove();
+                } else if (indexCreatedBeforeV2_0 && fieldName.equals("validate_lon")) {
+                    if (builder.fieldType().ignoreMalformed() == false) {
+                        builder.fieldType().ignoreMalformed = !XContentMapValues.nodeBooleanValue(fieldNode);
+                    }
                     iterator.remove();
-                } else if (fieldName.equals("validate_lat")) {
-                    builder.fieldType().setValidateLat(XContentMapValues.nodeBooleanValue(fieldNode));
+                } else if (indexCreatedBeforeV2_0 && fieldName.equals("validate_lat")) {
+                    if (builder.fieldType().ignoreMalformed == false) {
+                        builder.fieldType().ignoreMalformed = !XContentMapValues.nodeBooleanValue(fieldNode);
+                    }
                     iterator.remove();
-                } else if (fieldName.equals("normalize")) {
-                    builder.fieldType().setNormalizeLat(XContentMapValues.nodeBooleanValue(fieldNode));
-                    builder.fieldType().setNormalizeLon(XContentMapValues.nodeBooleanValue(fieldNode));
+                } else if (fieldName.equals(Names.COERCE)) {
+                    builder.fieldType().coerce = XContentMapValues.nodeBooleanValue(fieldNode);
+                    if (builder.fieldType().coerce == true) {
+                        builder.fieldType().ignoreMalformed = true;
+                    }
                     iterator.remove();
-                } else if (fieldName.equals("normalize_lat")) {
-                    builder.fieldType().setNormalizeLat(XContentMapValues.nodeBooleanValue(fieldNode));
+                } else if (indexCreatedBeforeV2_0 && fieldName.equals("normalize")) {
+                    builder.fieldType().coerce = XContentMapValues.nodeBooleanValue(fieldNode);
                     iterator.remove();
-                } else if (fieldName.equals("normalize_lon")) {
-                    builder.fieldType().setNormalizeLon(XContentMapValues.nodeBooleanValue(fieldNode));
+                } else if (indexCreatedBeforeV2_0 && fieldName.equals("normalize_lat")) {
+                    builder.fieldType().coerce = XContentMapValues.nodeBooleanValue(fieldNode);
+                    iterator.remove();
+                } else if (indexCreatedBeforeV2_0 && fieldName.equals("normalize_lon")) {
+                    if (builder.fieldType().coerce == false) {
+                        builder.fieldType().coerce = XContentMapValues.nodeBooleanValue(fieldNode);
+                    }
                     iterator.remove();
                 } else if (parseMultiField(builder, name, parserContext, fieldName, fieldNode)) {
                     iterator.remove();
@@ -281,10 +300,8 @@ public class GeoPointFieldMapper extends FieldMapper implements ArrayValueMapper
 
         private MappedFieldType latFieldType;
         private MappedFieldType lonFieldType;
-        private boolean validateLon = true;
-        private boolean validateLat = true;
-        private boolean normalizeLon = true;
-        private boolean normalizeLat = true;
+        private boolean ignoreMalformed = false;
+        private boolean coerce = false;
 
         public GeoPointFieldType() {}
 
@@ -295,10 +312,8 @@ public class GeoPointFieldMapper extends FieldMapper implements ArrayValueMapper
             this.geohashPrefixEnabled = ref.geohashPrefixEnabled;
             this.latFieldType = ref.latFieldType; // copying ref is ok, this can never be modified
             this.lonFieldType = ref.lonFieldType; // copying ref is ok, this can never be modified
-            this.validateLon = ref.validateLon;
-            this.validateLat = ref.validateLat;
-            this.normalizeLon = ref.normalizeLon;
-            this.normalizeLat = ref.normalizeLat;
+            this.coerce = ref.coerce;
+            this.ignoreMalformed = ref.ignoreMalformed;
         }
 
         @Override
@@ -312,10 +327,8 @@ public class GeoPointFieldMapper extends FieldMapper implements ArrayValueMapper
             GeoPointFieldType that = (GeoPointFieldType) o;
             return geohashPrecision == that.geohashPrecision &&
                 geohashPrefixEnabled == that.geohashPrefixEnabled &&
-                validateLon == that.validateLon &&
-                validateLat == that.validateLat &&
-                normalizeLon == that.normalizeLon &&
-                normalizeLat == that.normalizeLat &&
+                coerce == that.coerce &&
+                ignoreMalformed == that.ignoreMalformed &&
                 java.util.Objects.equals(geohashFieldType, that.geohashFieldType) &&
                 java.util.Objects.equals(latFieldType, that.latFieldType) &&
                 java.util.Objects.equals(lonFieldType, that.lonFieldType);
@@ -323,7 +336,8 @@ public class GeoPointFieldMapper extends FieldMapper implements ArrayValueMapper
 
         @Override
         public int hashCode() {
-            return java.util.Objects.hash(super.hashCode(), geohashFieldType, geohashPrecision, geohashPrefixEnabled, latFieldType, lonFieldType, validateLon, validateLat, normalizeLon, normalizeLat);
+            return java.util.Objects.hash(super.hashCode(), geohashFieldType, geohashPrecision, geohashPrefixEnabled, latFieldType,
+                    lonFieldType, coerce, ignoreMalformed);
         }
 
         @Override
@@ -347,22 +361,10 @@ public class GeoPointFieldMapper extends FieldMapper implements ArrayValueMapper
             if (isGeohashPrefixEnabled() != other.isGeohashPrefixEnabled()) {
                 conflicts.add("mapper [" + names().fullName() + "] has different geohash_prefix");
             }
-            if (normalizeLat() != other.normalizeLat()) {
-                conflicts.add("mapper [" + names().fullName() + "] has different normalize_lat");
-            }
-            if (normalizeLon() != other.normalizeLon()) {
-                conflicts.add("mapper [" + names().fullName() + "] has different normalize_lon");
-            }
-            if (isLatLonEnabled() &&
+            if (isLatLonEnabled() && other.isLatLonEnabled() &&
                 latFieldType().numericPrecisionStep() != other.latFieldType().numericPrecisionStep()) {
                 conflicts.add("mapper [" + names().fullName() + "] has different precision_step");
             }
-            if (validateLat() != other.validateLat()) {
-                conflicts.add("mapper [" + names().fullName() + "] has different validate_lat");
-            }
-            if (validateLon() != other.validateLon()) {
-                conflicts.add("mapper [" + names().fullName() + "] has different validate_lon");
-            }
         }
 
         public boolean isGeohashEnabled() {
@@ -406,40 +408,22 @@ public class GeoPointFieldMapper extends FieldMapper implements ArrayValueMapper
             this.lonFieldType = lonFieldType;
         }
 
-        public boolean validateLon() {
-            return validateLon;
+        public boolean coerce() {
+            return this.coerce;
         }
 
-        public void setValidateLon(boolean validateLon) {
+        public void setCoerce(boolean coerce) {
             checkIfFrozen();
-            this.validateLon = validateLon;
+            this.coerce = coerce;
         }
 
-        public boolean validateLat() {
-            return validateLat;
+        public boolean ignoreMalformed() {
+            return this.ignoreMalformed;
         }
 
-        public void setValidateLat(boolean validateLat) {
+        public void setIgnoreMalformed(boolean ignoreMalformed) {
             checkIfFrozen();
-            this.validateLat = validateLat;
-        }
-
-        public boolean normalizeLon() {
-            return normalizeLon;
-        }
-
-        public void setNormalizeLon(boolean normalizeLon) {
-            checkIfFrozen();
-            this.normalizeLon = normalizeLon;
-        }
-
-        public boolean normalizeLat() {
-            return normalizeLat;
-        }
-
-        public void setNormalizeLat(boolean normalizeLat) {
-            checkIfFrozen();
-            this.normalizeLat = normalizeLat;
+            this.ignoreMalformed = ignoreMalformed;
         }
 
         @Override
@@ -586,7 +570,8 @@ public class GeoPointFieldMapper extends FieldMapper implements ArrayValueMapper
     private final StringFieldMapper geohashMapper;
 
     public GeoPointFieldMapper(String simpleName, MappedFieldType fieldType, MappedFieldType defaultFieldType, Settings indexSettings,
-            ContentPath.Type pathType, DoubleFieldMapper latMapper, DoubleFieldMapper lonMapper, StringFieldMapper geohashMapper,MultiFields multiFields) {
+            ContentPath.Type pathType, DoubleFieldMapper latMapper, DoubleFieldMapper lonMapper, StringFieldMapper geohashMapper,
+            MultiFields multiFields) {
         super(simpleName, fieldType, defaultFieldType, indexSettings, multiFields, null);
         this.pathType = pathType;
         this.latMapper = latMapper;
@@ -680,21 +665,22 @@ public class GeoPointFieldMapper extends FieldMapper implements ArrayValueMapper
     }
 
     private void parse(ParseContext context, GeoPoint point, String geohash) throws IOException {
-        if (fieldType().normalizeLat() || fieldType().normalizeLon()) {
-            GeoUtils.normalizePoint(point, fieldType().normalizeLat(), fieldType().normalizeLon());
-        }
-
-        if (fieldType().validateLat()) {
+        if (fieldType().ignoreMalformed == false) {
             if (point.lat() > 90.0 || point.lat() < -90.0) {
                 throw new IllegalArgumentException("illegal latitude value [" + point.lat() + "] for " + name());
             }
-        }
-        if (fieldType().validateLon()) {
             if (point.lon() > 180.0 || point.lon() < -180) {
                 throw new IllegalArgumentException("illegal longitude value [" + point.lon() + "] for " + name());
             }
         }
 
+        if (fieldType().coerce) {
+            // when coerce is false we assume all geopoints are already in a valid coordinate system,
+            // so this normalization step can be skipped
+            // LUCENE WATCH: This will be folded back into Lucene's GeoPointField
+            GeoUtils.normalizePoint(point, true, true);
+        }
+
         if (fieldType().indexOptions() != IndexOptions.NONE || fieldType().stored()) {
             Field field = new Field(fieldType().names().indexName(), Double.toString(point.lat()) + ',' + Double.toString(point.lon()), fieldType());
             context.doc().add(field);
@@ -755,33 +741,11 @@ public class GeoPointFieldMapper extends FieldMapper implements ArrayValueMapper
         if (fieldType().isLatLonEnabled() && (includeDefaults || fieldType().latFieldType().numericPrecisionStep() != NumericUtils.PRECISION_STEP_DEFAULT)) {
             builder.field("precision_step", fieldType().latFieldType().numericPrecisionStep());
         }
-        if (includeDefaults || fieldType().validateLat() != Defaults.VALIDATE_LAT || fieldType().validateLon() != Defaults.VALIDATE_LON) {
-            if (fieldType().validateLat() && fieldType().validateLon()) {
-                builder.field("validate", true);
-            } else if (!fieldType().validateLat() && !fieldType().validateLon()) {
-                builder.field("validate", false);
-            } else {
-                if (includeDefaults || fieldType().validateLat() != Defaults.VALIDATE_LAT) {
-                    builder.field("validate_lat", fieldType().validateLat());
-                }
-                if (includeDefaults || fieldType().validateLon() != Defaults.VALIDATE_LON) {
-                    builder.field("validate_lon", fieldType().validateLon());
-                }
-            }
+        if (includeDefaults || fieldType().coerce != Defaults.COERCE) {
+            builder.field(Names.COERCE, fieldType().coerce);
         }
-        if (includeDefaults || fieldType().normalizeLat() != Defaults.NORMALIZE_LAT || fieldType().normalizeLon() != Defaults.NORMALIZE_LON) {
-            if (fieldType().normalizeLat() && fieldType().normalizeLon()) {
-                builder.field("normalize", true);
-            } else if (!fieldType().normalizeLat() && !fieldType().normalizeLon()) {
-                builder.field("normalize", false);
-            } else {
-                if (includeDefaults || fieldType().normalizeLat() != Defaults.NORMALIZE_LAT) {
-                    builder.field("normalize_lat", fieldType().normalizeLat());
-                }
-                if (includeDefaults || fieldType().normalizeLon() != Defaults.NORMALIZE_LON) {
-                    builder.field("normalize_lon", fieldType().normalizeLon());
-                }
-            }
+        if (includeDefaults || fieldType().ignoreMalformed != Defaults.IGNORE_MALFORMED) {
+            builder.field(Names.IGNORE_MALFORMED, fieldType().ignoreMalformed);
         }
     }
 
@@ -812,5 +776,4 @@ public class GeoPointFieldMapper extends FieldMapper implements ArrayValueMapper
             return new BytesRef(bytes);
         }
     }
-
 }
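
The backwards-compatibility translation performed by the parser above condenses to two rules for pre-2.0 indexes: the validate family folds into ignore_malformed with inverted sense, and the normalize family folds into coerce. Restated as a standalone helper (illustrative shape, not code from the patch):

    // illustrative restatement of the pre-2.0 option translation above
    static void translateLegacyOption(GeoPointFieldMapper.GeoPointFieldType fieldType,
                                      String option, boolean value) {
        switch (option) {
            case "validate":
            case "validate_lat":
            case "validate_lon":
                // validate=false used to accept bad points, i.e. ignore_malformed=true
                fieldType.setIgnoreMalformed(!value);
                break;
            case "normalize":
            case "normalize_lat":
            case "normalize_lon":
                // normalize maps directly onto coerce
                fieldType.setCoerce(value);
                break;
        }
    }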
diff --git a/core/src/main/java/org/elasticsearch/index/query/GeoBoundingBoxQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/GeoBoundingBoxQueryBuilder.java
index 9b376ca851e..99b348e9e55 100644
--- a/core/src/main/java/org/elasticsearch/index/query/GeoBoundingBoxQueryBuilder.java
+++ b/core/src/main/java/org/elasticsearch/index/query/GeoBoundingBoxQueryBuilder.java
@@ -41,6 +41,8 @@ public class GeoBoundingBoxQueryBuilder extends QueryBuilder {
 
     private String queryName;
     private String type;
+    private Boolean coerce;
+    private Boolean ignoreMalformed;
 
     public GeoBoundingBoxQueryBuilder(String name) {
         this.name = name;
@@ -134,6 +136,16 @@ public class GeoBoundingBoxQueryBuilder extends QueryBuilder {
         return this;
     }
 
+    public GeoBoundingBoxQueryBuilder coerce(boolean coerce) {
+        this.coerce = coerce;
+        return this;
+    }
+
+    public GeoBoundingBoxQueryBuilder ignoreMalformed(boolean ignoreMalformed) {
+        this.ignoreMalformed = ignoreMalformed;
+        return this;
+    }
+
     /**
      * Sets the type of executing of the geo bounding box. Can be either `memory` or `indexed`. Defaults
      * to `memory`.
@@ -169,6 +181,12 @@ public class GeoBoundingBoxQueryBuilder extends QueryBuilder {
         if (type != null) {
             builder.field("type", type);
         }
+        if (coerce != null) {
+            builder.field("coerce", coerce);
+        }
+        if (ignoreMalformed != null) {
+            builder.field("ignore_malformed", ignoreMalformed);
+        }
 
         builder.endObject();
     }
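
Client code opts into the new behavior through the builder. A minimal usage sketch using only methods visible in this diff (`pin.location` is an example field name; corner setters omitted):

    GeoBoundingBoxQueryBuilder query = new GeoBoundingBoxQueryBuilder("pin.location")
            .coerce(true)           // serialized as "coerce": true
            .ignoreMalformed(true); // serialized as "ignore_malformed": true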
diff --git a/core/src/main/java/org/elasticsearch/index/query/GeoBoundingBoxQueryParser.java b/core/src/main/java/org/elasticsearch/index/query/GeoBoundingBoxQueryParser.java
index 89012571a71..6dead6e94d4 100644
--- a/core/src/main/java/org/elasticsearch/index/query/GeoBoundingBoxQueryParser.java
+++ b/core/src/main/java/org/elasticsearch/index/query/GeoBoundingBoxQueryParser.java
@@ -21,12 +21,12 @@ package org.elasticsearch.index.query;
 
 import org.apache.lucene.search.Query;
 import org.elasticsearch.ElasticsearchParseException;
+import org.elasticsearch.Version;
 import org.elasticsearch.common.geo.GeoPoint;
 import org.elasticsearch.common.geo.GeoUtils;
 import org.elasticsearch.common.inject.Inject;
 import org.elasticsearch.common.xcontent.XContentParser;
 import org.elasticsearch.index.fielddata.IndexGeoPointFieldData;
-import org.elasticsearch.index.mapper.FieldMapper;
 import org.elasticsearch.index.mapper.MappedFieldType;
 import org.elasticsearch.index.mapper.geo.GeoPointFieldMapper;
 import org.elasticsearch.index.search.geo.InMemoryGeoBoundingBoxQuery;
@@ -81,7 +81,9 @@ public class GeoBoundingBoxQueryParser implements QueryParser {
         String queryName = null;
         String currentFieldName = null;
         XContentParser.Token token;
-        boolean normalize = true;
+        final boolean indexCreatedBeforeV2_0 = parseContext.indexVersionCreated().before(Version.V_2_0_0);
+        boolean coerce = false;
+        boolean ignoreMalformed = false;
 
         GeoPoint sparse = new GeoPoint();
         
@@ -137,10 +139,15 @@ public class GeoBoundingBoxQueryParser implements QueryParser {
             } else if (token.isValue()) {
                 if ("_name".equals(currentFieldName)) {
                     queryName = parser.text();
-                } else if ("normalize".equals(currentFieldName)) {
-                    normalize = parser.booleanValue();
+                } else if ("coerce".equals(currentFieldName) || (indexCreatedBeforeV2_0 && "normalize".equals(currentFieldName))) {
+                    coerce = parser.booleanValue();
+                    if (coerce == true) {
+                        ignoreMalformed = true;
+                    }
                 } else if ("type".equals(currentFieldName)) {
                     type = parser.text();
+                } else if ("ignore_malformed".equals(currentFieldName) && coerce == false) {
+                    ignoreMalformed = parser.booleanValue();
                 } else {
                     throw new QueryParsingException(parseContext, "failed to parse [{}] query. unexpected field [{}]", NAME, currentFieldName);
                 }
@@ -150,8 +157,24 @@ public class GeoBoundingBoxQueryParser implements QueryParser {
         final GeoPoint topLeft = sparse.reset(top, left);  //just keep the object
         final GeoPoint bottomRight = new GeoPoint(bottom, right);
 
-        if (normalize) {
-            // Special case: if the difference bettween the left and right is 360 and the right is greater than the left, we are asking for 
+        // validation was not available prior to 2.x, so to keep bwc percolation queries working we only validate coordinates on indexes created as of 2.x
+        if (!indexCreatedBeforeV2_0 && !ignoreMalformed) {
+            if (topLeft.lat() > 90.0 || topLeft.lat() < -90.0) {
+                throw new QueryParsingException(parseContext, "illegal latitude value [{}] for [{}]", topLeft.lat(), NAME);
+            }
+            if (topLeft.lon() > 180.0 || topLeft.lon() < -180) {
+                throw new QueryParsingException(parseContext, "illegal longitude value [{}] for [{}]", topLeft.lon(), NAME);
+            }
+            if (bottomRight.lat() > 90.0 || bottomRight.lat() < -90.0) {
+                throw new QueryParsingException(parseContext, "illegal latitude value [{}] for [{}]", bottomRight.lat(), NAME);
+            }
+            if (bottomRight.lon() > 180.0 || bottomRight.lon() < -180) {
+                throw new QueryParsingException(parseContext, "illegal longitude value [{}] for [{}]", bottomRight.lon(), NAME);
+            }
+        }
+
+        if (coerce) {
+            // Special case: if the difference between the left and right is 360 and the right is greater than the left, we are asking for
             // the complete longitude range so we need to set longitude to the complete longitude range
             boolean completeLonRange = ((right - left) % 360 == 0 && right > left);
             GeoUtils.normalizePoint(topLeft, true, !completeLonRange);
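
The complete-longitude-range special case is easiest to see with concrete numbers: a box from left = -180 to right = 180 spans the whole globe, and naive longitude normalization would fold both corners onto the same meridian:

    // worked example of the check in the hunk above
    double left = -180.0, right = 180.0;
    boolean completeLonRange = ((right - left) % 360 == 0 && right > left); // (360 % 360 == 0) && (180 > -180) -> true
    // normalizePoint(topLeft, true, !completeLonRange) therefore skips longitude
    // normalization and the full -180..180 range is preserved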
diff --git a/core/src/main/java/org/elasticsearch/index/query/GeoDistanceQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/GeoDistanceQueryBuilder.java
index 0995a5ecacf..77c8f944864 100644
--- a/core/src/main/java/org/elasticsearch/index/query/GeoDistanceQueryBuilder.java
+++ b/core/src/main/java/org/elasticsearch/index/query/GeoDistanceQueryBuilder.java
@@ -44,6 +44,10 @@ public class GeoDistanceQueryBuilder extends QueryBuilder {
 
     private String queryName;
 
+    private Boolean coerce;
+
+    private Boolean ignoreMalformed;
+
     public GeoDistanceQueryBuilder(String name) {
         this.name = name;
     }
@@ -97,6 +101,16 @@ public class GeoDistanceQueryBuilder extends QueryBuilder {
         return this;
     }
 
+    public GeoDistanceQueryBuilder coerce(boolean coerce) {
+        this.coerce = coerce;
+        return this;
+    }
+
+    public GeoDistanceQueryBuilder ignoreMalformed(boolean ignoreMalformed) {
+        this.ignoreMalformed = ignoreMalformed;
+        return this;
+    }
+
     @Override
     protected void doXContent(XContentBuilder builder, Params params) throws IOException {
         builder.startObject(GeoDistanceQueryParser.NAME);
@@ -115,6 +129,12 @@ public class GeoDistanceQueryBuilder extends QueryBuilder {
         if (queryName != null) {
             builder.field("_name", queryName);
         }
+        if (coerce != null) {
+            builder.field("coerce", coerce);
+        }
+        if (ignoreMalformed != null) {
+            builder.field("ignore_malformed", ignoreMalformed);
+        }
         builder.endObject();
     }
 }
diff --git a/core/src/main/java/org/elasticsearch/index/query/GeoDistanceQueryParser.java b/core/src/main/java/org/elasticsearch/index/query/GeoDistanceQueryParser.java
index b78562567ec..82013818810 100644
--- a/core/src/main/java/org/elasticsearch/index/query/GeoDistanceQueryParser.java
+++ b/core/src/main/java/org/elasticsearch/index/query/GeoDistanceQueryParser.java
@@ -20,6 +20,7 @@
 package org.elasticsearch.index.query;
 
 import org.apache.lucene.search.Query;
+import org.elasticsearch.Version;
 import org.elasticsearch.common.geo.GeoDistance;
 import org.elasticsearch.common.geo.GeoHashUtils;
 import org.elasticsearch.common.geo.GeoPoint;
@@ -28,7 +29,6 @@ import org.elasticsearch.common.inject.Inject;
 import org.elasticsearch.common.unit.DistanceUnit;
 import org.elasticsearch.common.xcontent.XContentParser;
 import org.elasticsearch.index.fielddata.IndexGeoPointFieldData;
-import org.elasticsearch.index.mapper.FieldMapper;
 import org.elasticsearch.index.mapper.MappedFieldType;
 import org.elasticsearch.index.mapper.geo.GeoPointFieldMapper;
 import org.elasticsearch.index.search.geo.GeoDistanceRangeQuery;
@@ -71,8 +71,9 @@ public class GeoDistanceQueryParser implements QueryParser {
         DistanceUnit unit = DistanceUnit.DEFAULT;
         GeoDistance geoDistance = GeoDistance.DEFAULT;
         String optimizeBbox = "memory";
-        boolean normalizeLon = true;
-        boolean normalizeLat = true;
+        final boolean indexCreatedBeforeV2_0 = parseContext.indexVersionCreated().before(Version.V_2_0_0);
+        boolean coerce = false;
+        boolean ignoreMalformed = false;
         while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {
             if (token == XContentParser.Token.FIELD_NAME) {
                 currentFieldName = parser.currentName();
@@ -125,9 +126,13 @@ public class GeoDistanceQueryParser implements QueryParser {
                     queryName = parser.text();
                 } else if ("optimize_bbox".equals(currentFieldName) || "optimizeBbox".equals(currentFieldName)) {
                     optimizeBbox = parser.textOrNull();
-                } else if ("normalize".equals(currentFieldName)) {
-                    normalizeLat = parser.booleanValue();
-                    normalizeLon = parser.booleanValue();
+                } else if ("coerce".equals(currentFieldName) || (indexCreatedBeforeV2_0 && "normalize".equals(currentFieldName))) {
+                    coerce = parser.booleanValue();
+                    if (coerce == true) {
+                        ignoreMalformed = true;
+                    }
+                } else if ("ignore_malformed".equals(currentFieldName) && coerce == false) {
+                    ignoreMalformed = parser.booleanValue();
                 } else {
                     point.resetFromString(parser.text());
                     fieldName = currentFieldName;
@@ -135,6 +140,20 @@ public class GeoDistanceQueryParser implements QueryParser {
             }
         }
 
+        // validation was not available prior to 2.x, so to keep bwc percolation queries working we only validate coordinates on indexes created as of 2.x
+        if (!indexCreatedBeforeV2_0 && !ignoreMalformed) {
+            if (point.lat() > 90.0 || point.lat() < -90.0) {
+                throw new QueryParsingException(parseContext, "illegal latitude value [{}] for [{}]", point.lat(), NAME);
+            }
+            if (point.lon() > 180.0 || point.lon() < -180) {
+                throw new QueryParsingException(parseContext, "illegal longitude value [{}] for [{}]", point.lon(), NAME);
+            }
+        }
+
+        if (coerce) {
+            GeoUtils.normalizePoint(point, coerce, coerce);
+        }
+
         if (vDistance == null) {
             throw new QueryParsingException(parseContext, "geo_distance requires 'distance' to be specified");
         } else if (vDistance instanceof Number) {
@@ -144,10 +163,6 @@ public class GeoDistanceQueryParser implements QueryParser {
         }
         distance = geoDistance.normalize(distance, DistanceUnit.DEFAULT);
 
-        if (normalizeLat || normalizeLon) {
-            GeoUtils.normalizePoint(point, normalizeLat, normalizeLon);
-        }
-
         MappedFieldType fieldType = parseContext.fieldMapper(fieldName);
         if (fieldType == null) {
             throw new QueryParsingException(parseContext, "failed to find geo_point field [" + fieldName + "]");
diff --git a/core/src/main/java/org/elasticsearch/index/query/GeoDistanceRangeQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/GeoDistanceRangeQueryBuilder.java
index d21810f97e3..6aa6f0fd9d7 100644
--- a/core/src/main/java/org/elasticsearch/index/query/GeoDistanceRangeQueryBuilder.java
+++ b/core/src/main/java/org/elasticsearch/index/query/GeoDistanceRangeQueryBuilder.java
@@ -46,6 +46,10 @@ public class GeoDistanceRangeQueryBuilder extends QueryBuilder {
 
     private String optimizeBbox;
 
+    private Boolean coerce;
+
+    private Boolean ignoreMalformed;
+
     public GeoDistanceRangeQueryBuilder(String name) {
         this.name = name;
     }
@@ -125,6 +129,16 @@ public class GeoDistanceRangeQueryBuilder extends QueryBuilder {
         return this;
     }
 
+    public GeoDistanceRangeQueryBuilder coerce(boolean coerce) {
+        this.coerce = coerce;
+        return this;
+    }
+
+    public GeoDistanceRangeQueryBuilder ignoreMalformed(boolean ignoreMalformed) {
+        this.ignoreMalformed = ignoreMalformed;
+        return this;
+    }
+
     /**
      * Sets the filter name for the filter that can be used when searching for matched_filters per hit.
      */
@@ -154,6 +168,12 @@ public class GeoDistanceRangeQueryBuilder extends QueryBuilder {
         if (queryName != null) {
             builder.field("_name", queryName);
         }
+        if (coerce != null) {
+            builder.field("coerce", coerce);
+        }
+        if (ignoreMalformed != null) {
+            builder.field("ignore_malformed", ignoreMalformed);
+        }
         builder.endObject();
     }
 }
diff --git a/core/src/main/java/org/elasticsearch/index/query/GeoDistanceRangeQueryParser.java b/core/src/main/java/org/elasticsearch/index/query/GeoDistanceRangeQueryParser.java
index 6c8479bee82..f60d9447e7e 100644
--- a/core/src/main/java/org/elasticsearch/index/query/GeoDistanceRangeQueryParser.java
+++ b/core/src/main/java/org/elasticsearch/index/query/GeoDistanceRangeQueryParser.java
@@ -20,6 +20,7 @@
 package org.elasticsearch.index.query;
 
 import org.apache.lucene.search.Query;
+import org.elasticsearch.Version;
 import org.elasticsearch.common.geo.GeoDistance;
 import org.elasticsearch.common.geo.GeoHashUtils;
 import org.elasticsearch.common.geo.GeoPoint;
@@ -28,7 +29,6 @@ import org.elasticsearch.common.inject.Inject;
 import org.elasticsearch.common.unit.DistanceUnit;
 import org.elasticsearch.common.xcontent.XContentParser;
 import org.elasticsearch.index.fielddata.IndexGeoPointFieldData;
-import org.elasticsearch.index.mapper.FieldMapper;
 import org.elasticsearch.index.mapper.MappedFieldType;
 import org.elasticsearch.index.mapper.geo.GeoPointFieldMapper;
 import org.elasticsearch.index.search.geo.GeoDistanceRangeQuery;
@@ -73,8 +73,9 @@ public class GeoDistanceRangeQueryParser implements QueryParser {
         DistanceUnit unit = DistanceUnit.DEFAULT;
         GeoDistance geoDistance = GeoDistance.DEFAULT;
         String optimizeBbox = "memory";
-        boolean normalizeLon = true;
-        boolean normalizeLat = true;
+        final boolean indexCreatedBeforeV2_0 = parseContext.indexVersionCreated().before(Version.V_2_0_0);
+        boolean coerce = false;
+        boolean ignoreMalformed = false;
         while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {
             if (token == XContentParser.Token.FIELD_NAME) {
                 currentFieldName = parser.currentName();
@@ -155,9 +156,13 @@ public class GeoDistanceRangeQueryParser implements QueryParser {
                     queryName = parser.text();
                 } else if ("optimize_bbox".equals(currentFieldName) || "optimizeBbox".equals(currentFieldName)) {
                     optimizeBbox = parser.textOrNull();
-                } else if ("normalize".equals(currentFieldName)) {
-                    normalizeLat = parser.booleanValue();
-                    normalizeLon = parser.booleanValue();
+                } else if ("coerce".equals(currentFieldName) || (indexCreatedBeforeV2_0 && "normalize".equals(currentFieldName))) {
+                    coerce = parser.booleanValue();
+                    if (coerce == true) {
+                        ignoreMalformed = true;
+                    }
+                } else if ("ignore_malformed".equals(currentFieldName) && coerce == false) {
+                    ignoreMalformed = parser.booleanValue();
                 } else {
                     point.resetFromString(parser.text());
                     fieldName = currentFieldName;
@@ -165,6 +170,20 @@ public class GeoDistanceRangeQueryParser implements QueryParser {
             }
         }
 
+        // validation was not available prior to 2.x, so to keep bwc percolation queries working we only validate coordinates on indexes created as of 2.x
+        if (!indexCreatedBeforeV2_0 && !ignoreMalformed) {
+            if (point.lat() > 90.0 || point.lat() < -90.0) {
+                throw new QueryParsingException(parseContext, "illegal latitude value [{}] for [{}]", point.lat(), NAME);
+            }
+            if (point.lon() > 180.0 || point.lon() < -180) {
+                throw new QueryParsingException(parseContext, "illegal longitude value [{}] for [{}]", point.lon(), NAME);
+            }
+        }
+
+        if (coerce) {
+            GeoUtils.normalizePoint(point, coerce, coerce);
+        }
+
         Double from = null;
         Double to = null;
         if (vFrom != null) {
@@ -184,10 +203,6 @@ public class GeoDistanceRangeQueryParser implements QueryParser {
             to = geoDistance.normalize(to, DistanceUnit.DEFAULT);
         }
 
-        if (normalizeLat || normalizeLon) {
-            GeoUtils.normalizePoint(point, normalizeLat, normalizeLon);
-        }
-
         MappedFieldType fieldType = parseContext.fieldMapper(fieldName);
         if (fieldType == null) {
             throw new QueryParsingException(parseContext, "failed to find geo_point field [" + fieldName + "]");
diff --git a/core/src/main/java/org/elasticsearch/index/query/GeoPolygonQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/GeoPolygonQueryBuilder.java
index 4fd2f4153d9..27d14ecb137 100644
--- a/core/src/main/java/org/elasticsearch/index/query/GeoPolygonQueryBuilder.java
+++ b/core/src/main/java/org/elasticsearch/index/query/GeoPolygonQueryBuilder.java
@@ -38,6 +38,10 @@ public class GeoPolygonQueryBuilder extends QueryBuilder {
 
     private String queryName;
 
+    private Boolean coerce;
+
+    private Boolean ignoreMalformed;
+
     public GeoPolygonQueryBuilder(String name) {
         this.name = name;
     }
@@ -70,6 +74,16 @@ public class GeoPolygonQueryBuilder extends QueryBuilder {
         return this;
     }
 
+    public GeoPolygonQueryBuilder coerce(boolean coerce) {
+        this.coerce = coerce;
+        return this;
+    }
+
+    public GeoPolygonQueryBuilder ignoreMalformed(boolean ignoreMalformed) {
+        this.ignoreMalformed = ignoreMalformed;
+        return this;
+    }
+
     @Override
     protected void doXContent(XContentBuilder builder, Params params) throws IOException {
         builder.startObject(GeoPolygonQueryParser.NAME);
@@ -85,6 +99,12 @@ public class GeoPolygonQueryBuilder extends QueryBuilder {
         if (queryName != null) {
             builder.field("_name", queryName);
         }
+        if (coerce != null) {
+            builder.field("coerce", coerce);
+        }
+        if (ignoreMalformed != null) {
+            builder.field("ignore_malformed", ignoreMalformed);
+        }
 
         builder.endObject();
     }
diff --git a/core/src/main/java/org/elasticsearch/index/query/GeoPolygonQueryParser.java b/core/src/main/java/org/elasticsearch/index/query/GeoPolygonQueryParser.java
index 43d368666e1..00401bc8ca8 100644
--- a/core/src/main/java/org/elasticsearch/index/query/GeoPolygonQueryParser.java
+++ b/core/src/main/java/org/elasticsearch/index/query/GeoPolygonQueryParser.java
@@ -22,13 +22,13 @@ package org.elasticsearch.index.query;
 import com.google.common.collect.Lists;
 
 import org.apache.lucene.search.Query;
+import org.elasticsearch.Version;
 import org.elasticsearch.common.geo.GeoPoint;
 import org.elasticsearch.common.geo.GeoUtils;
 import org.elasticsearch.common.inject.Inject;
 import org.elasticsearch.common.xcontent.XContentParser;
 import org.elasticsearch.common.xcontent.XContentParser.Token;
 import org.elasticsearch.index.fielddata.IndexGeoPointFieldData;
-import org.elasticsearch.index.mapper.FieldMapper;
 import org.elasticsearch.index.mapper.MappedFieldType;
 import org.elasticsearch.index.mapper.geo.GeoPointFieldMapper;
 import org.elasticsearch.index.search.geo.GeoPolygonQuery;
@@ -70,9 +70,9 @@ public class GeoPolygonQueryParser implements QueryParser {
 
         List<GeoPoint> shell = Lists.newArrayList();
 
-        boolean normalizeLon = true;
-        boolean normalizeLat = true;
-
+        final boolean indexCreatedBeforeV2_0 = parseContext.indexVersionCreated().before(Version.V_2_0_0);
+        boolean coerce = false;
+        boolean ignoreMalformed = false;
         String queryName = null;
         String currentFieldName = null;
         XContentParser.Token token;
@@ -108,9 +108,13 @@ public class GeoPolygonQueryParser implements QueryParser {
             } else if (token.isValue()) {
                 if ("_name".equals(currentFieldName)) {
                     queryName = parser.text();
-                } else if ("normalize".equals(currentFieldName)) {
-                    normalizeLat = parser.booleanValue();
-                    normalizeLon = parser.booleanValue();
+                } else if ("coerce".equals(currentFieldName) || (indexCreatedBeforeV2_0 && "normalize".equals(currentFieldName))) {
+                    coerce = parser.booleanValue();
+                    if (coerce == true) {
+                        ignoreMalformed = true;
+                    }
+                } else if ("ignore_malformed".equals(currentFieldName) && coerce == false) {
+                    ignoreMalformed = parser.booleanValue();
                 } else {
                     throw new QueryParsingException(parseContext, "[geo_polygon] query does not support [" + currentFieldName + "]");
                 }
@@ -134,9 +138,21 @@ public class GeoPolygonQueryParser implements QueryParser {
             }
         }
 
-        if (normalizeLat || normalizeLon) {
+        // validation was not available prior to 2.x, so to keep bwc percolation queries working we only validate coordinates on indexes created as of 2.x
+        if (!indexCreatedBeforeV2_0 && !ignoreMalformed) {
             for (GeoPoint point : shell) {
-                GeoUtils.normalizePoint(point, normalizeLat, normalizeLon);
+                if (point.lat() > 90.0 || point.lat() < -90.0) {
+                    throw new QueryParsingException(parseContext, "illegal latitude value [{}] for [{}]", point.lat(), NAME);
+                }
+                if (point.lon() > 180.0 || point.lon() < -180) {
+                    throw new QueryParsingException(parseContext, "illegal longitude value [{}] for [{}]", point.lon(), NAME);
+                }
+            }
+        }
+
+        if (coerce) {
+            for (GeoPoint point : shell) {
+                GeoUtils.normalizePoint(point, coerce, coerce);
             }
         }
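
For a sense of what coercion does to an out-of-range point, per GeoUtils.normalizePoint as invoked above (the concrete numbers are an illustration):

    GeoPoint point = new GeoPoint(45.0, 190.0); // longitude outside [-180, 180)
    GeoUtils.normalizePoint(point, true, true); // wraps longitude into range
    // point is now (45.0, -170.0)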
 
diff --git a/core/src/main/java/org/elasticsearch/search/sort/GeoDistanceSortBuilder.java b/core/src/main/java/org/elasticsearch/search/sort/GeoDistanceSortBuilder.java
index 5a128f44e02..8f502e53058 100644
--- a/core/src/main/java/org/elasticsearch/search/sort/GeoDistanceSortBuilder.java
+++ b/core/src/main/java/org/elasticsearch/search/sort/GeoDistanceSortBuilder.java
@@ -47,6 +47,8 @@ public class GeoDistanceSortBuilder extends SortBuilder {
     private String sortMode;
     private QueryBuilder nestedFilter;
     private String nestedPath;
+    private Boolean coerce;
+    private Boolean ignoreMalformed;
 
     /**
      * Constructs a new distance based sort on a geo point like field.
@@ -146,6 +148,16 @@ public class GeoDistanceSortBuilder extends SortBuilder {
         return this;
     }
 
+    public GeoDistanceSortBuilder coerce(boolean coerce) {
+        this.coerce = coerce;
+        return this;
+    }
+
+    public GeoDistanceSortBuilder ignoreMalformed(boolean ignoreMalformed) {
+        this.ignoreMalformed = ignoreMalformed;
+        return this;
+    }
+
     @Override
     public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
         builder.startObject("_geo_distance");
@@ -181,6 +193,12 @@ public class GeoDistanceSortBuilder extends SortBuilder {
         if (nestedFilter != null) {
             builder.field("nested_filter", nestedFilter, params);
         }
+        if (coerce != null) {
+            builder.field("coerce", coerce);
+        }
+        if (ignoreMalformed != null) {
+            builder.field("ignore_malformed", ignoreMalformed);
+        }
 
         builder.endObject();
         return builder;
diff --git a/core/src/main/java/org/elasticsearch/search/sort/GeoDistanceSortParser.java b/core/src/main/java/org/elasticsearch/search/sort/GeoDistanceSortParser.java
index e7941a41d13..6f4a0dfbb4a 100644
--- a/core/src/main/java/org/elasticsearch/search/sort/GeoDistanceSortParser.java
+++ b/core/src/main/java/org/elasticsearch/search/sort/GeoDistanceSortParser.java
@@ -29,6 +29,7 @@ import org.apache.lucene.search.SortField;
 import org.apache.lucene.search.join.BitDocIdSetFilter;
 import org.apache.lucene.util.BitSet;
 import org.elasticsearch.ElasticsearchParseException;
+import org.elasticsearch.Version;
 import org.elasticsearch.common.geo.GeoDistance;
 import org.elasticsearch.common.geo.GeoDistance.FixedSourceDistance;
 import org.elasticsearch.common.geo.GeoPoint;
@@ -42,7 +43,6 @@ import org.elasticsearch.index.fielddata.IndexGeoPointFieldData;
 import org.elasticsearch.index.fielddata.MultiGeoPointValues;
 import org.elasticsearch.index.fielddata.NumericDoubleValues;
 import org.elasticsearch.index.fielddata.SortedNumericDoubleValues;
-import org.elasticsearch.index.mapper.FieldMapper;
 import org.elasticsearch.index.mapper.MappedFieldType;
 import org.elasticsearch.index.mapper.object.ObjectMapper;
 import org.elasticsearch.index.query.support.NestedInnerQueryParseSupport;
@@ -73,8 +73,9 @@ public class GeoDistanceSortParser implements SortParser {
         MultiValueMode sortMode = null;
         NestedInnerQueryParseSupport nestedHelper = null;
 
-        boolean normalizeLon = true;
-        boolean normalizeLat = true;
+        final boolean indexCreatedBeforeV2_0 = context.queryParserService().getIndexCreatedVersion().before(Version.V_2_0_0);
+        boolean coerce = false;
+        boolean ignoreMalformed = false;
 
         XContentParser.Token token;
         String currentName = parser.currentName();
@@ -107,9 +108,13 @@ public class GeoDistanceSortParser implements SortParser {
                     unit = DistanceUnit.fromString(parser.text());
                 } else if (currentName.equals("distance_type") || currentName.equals("distanceType")) {
                     geoDistance = GeoDistance.fromString(parser.text());
-                } else if ("normalize".equals(currentName)) {
-                    normalizeLat = parser.booleanValue();
-                    normalizeLon = parser.booleanValue();
+                } else if ("coerce".equals(currentName) || (indexCreatedBeforeV2_0 && "normalize".equals(currentName))) {
+                    coerce = parser.booleanValue();
+                    if (coerce == true) {
+                        ignoreMalformed = true;
+                    }
+                } else if ("ignore_malformed".equals(currentName) && coerce == false) {
+                    ignoreMalformed = parser.booleanValue();
                 } else if ("sort_mode".equals(currentName) || "sortMode".equals(currentName) || "mode".equals(currentName)) {
                     sortMode = MultiValueMode.fromString(parser.text());
                 } else if ("nested_path".equals(currentName) || "nestedPath".equals(currentName)) {
@@ -126,9 +131,21 @@ public class GeoDistanceSortParser implements SortParser {
             }
         }
 
-        if (normalizeLat || normalizeLon) {
+        // validation was not available prior to 2.x, so to support bwc percolation queries we validate only on indexes created with 2.x or later (and only when ignore_malformed is false)
+        if (!indexCreatedBeforeV2_0 && !ignoreMalformed) {
             for (GeoPoint point : geoPoints) {
-                GeoUtils.normalizePoint(point, normalizeLat, normalizeLon);
+                if (point.lat() > 90.0 || point.lat() < -90.0) {
+                    throw new ElasticsearchParseException("illegal latitude value [{}] for [GeoDistanceSort]", point.lat());
+                }
+                if (point.lon() > 180.0 || point.lon() < -180.0) {
+                    throw new ElasticsearchParseException("illegal longitude value [{}] for [GeoDistanceSort]", point.lon());
+                }
+            }
+        }
+
+        if (coerce) {
+            for (GeoPoint point : geoPoints) {
+                GeoUtils.normalizePoint(point, coerce, coerce);
             }
         }
 
diff --git a/core/src/test/java/org/elasticsearch/index/mapper/geo/GeoPointFieldMapperTests.java b/core/src/test/java/org/elasticsearch/index/mapper/geo/GeoPointFieldMapperTests.java
index 2584e861643..1ca1c3a2837 100644
--- a/core/src/test/java/org/elasticsearch/index/mapper/geo/GeoPointFieldMapperTests.java
+++ b/core/src/test/java/org/elasticsearch/index/mapper/geo/GeoPointFieldMapperTests.java
@@ -18,7 +18,10 @@
  */
 package org.elasticsearch.index.mapper.geo;
 
+import org.elasticsearch.Version;
+import org.elasticsearch.cluster.metadata.IndexMetaData;
 import org.elasticsearch.common.geo.GeoHashUtils;
+import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.common.xcontent.XContentFactory;
 import org.elasticsearch.index.mapper.DocumentMapper;
 import org.elasticsearch.index.mapper.DocumentMapperParser;
@@ -26,6 +29,7 @@ import org.elasticsearch.index.mapper.MapperParsingException;
 import org.elasticsearch.index.mapper.MergeResult;
 import org.elasticsearch.index.mapper.ParsedDocument;
 import org.elasticsearch.test.ESSingleNodeTestCase;
+import org.elasticsearch.test.VersionUtils;
 import org.junit.Test;
 
 import java.util.ArrayList;
@@ -138,7 +142,8 @@ public class GeoPointFieldMapperTests extends ESSingleNodeTestCase {
     public void testNormalizeLatLonValuesDefault() throws Exception {
         // default to normalize
         String mapping = XContentFactory.jsonBuilder().startObject().startObject("type")
-                .startObject("properties").startObject("point").field("type", "geo_point").endObject().endObject()
+                .startObject("properties").startObject("point").field("type", "geo_point").field("coerce", true)
+                        .field("ignore_malformed", true).endObject().endObject()
                 .endObject().endObject().string();
 
         DocumentMapper defaultMapper = createIndex("test").mapperService().documentMapperParser().parse(mapping);
@@ -171,7 +176,8 @@ public class GeoPointFieldMapperTests extends ESSingleNodeTestCase {
     @Test
     public void testValidateLatLonValues() throws Exception {
         String mapping = XContentFactory.jsonBuilder().startObject().startObject("type")
-                .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("normalize", false).field("validate", true).endObject().endObject()
+                .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("coerce", false)
+                .field("ignore_malformed", false).endObject().endObject()
                 .endObject().endObject().string();
 
         DocumentMapper defaultMapper = createIndex("test").mapperService().documentMapperParser().parse(mapping);
@@ -231,7 +237,8 @@ public class GeoPointFieldMapperTests extends ESSingleNodeTestCase {
     @Test
     public void testNoValidateLatLonValues() throws Exception {
         String mapping = XContentFactory.jsonBuilder().startObject().startObject("type")
-                .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("normalize", false).field("validate", false).endObject().endObject()
+                .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("coerce", false)
+                .field("ignore_malformed", true).endObject().endObject()
                 .endObject().endObject().string();
 
         DocumentMapper defaultMapper = createIndex("test").mapperService().documentMapperParser().parse(mapping);
@@ -472,30 +479,161 @@ public class GeoPointFieldMapperTests extends ESSingleNodeTestCase {
         assertThat(doc.rootDoc().getFields("point")[1].stringValue(), equalTo("1.4,1.5"));
     }
 
+
+    /**
+     * Test that expected exceptions are thrown when creating a new index with deprecated options
+     */
+    @Test
+    public void testOptionDeprecation() throws Exception {
+        DocumentMapperParser parser = createIndex("test").mapperService().documentMapperParser();
+        // test deprecation exceptions on newly created indexes
+        try {
+            String validateMapping = XContentFactory.jsonBuilder().startObject().startObject("type")
+                    .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("geohash", true)
+                    .field("validate", true).endObject().endObject()
+                    .endObject().endObject().string();
+            parser.parse(validateMapping);
+            fail("process completed successfully when " + MapperParsingException.class.getName() + " expected");
+        } catch (MapperParsingException e) {
+            assertEquals(e.getMessage(), "Mapping definition for [point] has unsupported parameters:  [validate : true]");
+        }
+
+        try {
+            String validateMapping = XContentFactory.jsonBuilder().startObject().startObject("type")
+                    .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("geohash", true)
+                    .field("validate_lat", true).endObject().endObject()
+                    .endObject().endObject().string();
+            parser.parse(validateMapping);
+            fail("process completed successfully when " + MapperParsingException.class.getName() + " expected");
+        } catch (MapperParsingException e) {
+            assertEquals(e.getMessage(), "Mapping definition for [point] has unsupported parameters:  [validate_lat : true]");
+        }
+
+        try {
+            String validateMapping = XContentFactory.jsonBuilder().startObject().startObject("type")
+                    .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("geohash", true)
+                    .field("validate_lon", true).endObject().endObject()
+                    .endObject().endObject().string();
+            parser.parse(validateMapping);
+            fail("process completed successfully when " + MapperParsingException.class.getName() + " expected");
+        } catch (MapperParsingException e) {
+            assertEquals(e.getMessage(), "Mapping definition for [point] has unsupported parameters:  [validate_lon : true]");
+        }
+
+        // test deprecated normalize
+        try {
+            String normalizeMapping = XContentFactory.jsonBuilder().startObject().startObject("type")
+                    .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("geohash", true)
+                    .field("normalize", true).endObject().endObject()
+                    .endObject().endObject().string();
+            parser.parse(normalizeMapping);
+            fail("process completed successfully when " + MapperParsingException.class.getName() + " expected");
+        } catch (MapperParsingException e) {
+            assertEquals(e.getMessage(), "Mapping definition for [point] has unsupported parameters:  [normalize : true]");
+        }
+
+        try {
+            String normalizeMapping = XContentFactory.jsonBuilder().startObject().startObject("type")
+                    .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("geohash", true)
+                    .field("normalize_lat", true).endObject().endObject()
+                    .endObject().endObject().string();
+            parser.parse(normalizeMapping);
+            fail("process completed successfully when " + MapperParsingException.class.getName() + " expected");
+        } catch (MapperParsingException e) {
+            assertEquals(e.getMessage(), "Mapping definition for [point] has unsupported parameters:  [normalize_lat : true]");
+        }
+
+        try {
+            String normalizeMapping = XContentFactory.jsonBuilder().startObject().startObject("type")
+                    .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("geohash", true)
+                    .field("normalize_lon", true).endObject().endObject()
+                    .endObject().endObject().string();
+            parser.parse(normalizeMapping);
+            fail("process completed successfully when " + MapperParsingException.class.getName() + " expected");
+        } catch (MapperParsingException e) {
+            assertEquals(e.getMessage(), "Mapping definition for [point] has unsupported parameters:  [normalize_lon : true]");
+        }
+    }
+
+    /**
+     * Test backward compatibility
+     */
+    @Test
+    public void testBackwardCompatibleOptions() throws Exception {
+        // backward compatibility testing
+        Settings settings = Settings.settingsBuilder().put(IndexMetaData.SETTING_VERSION_CREATED, VersionUtils.randomVersionBetween(random(), Version.V_1_0_0,
+                Version.V_1_7_1)).build();
+
+        // validate
+        DocumentMapperParser parser = createIndex("test", settings).mapperService().documentMapperParser();
+        String mapping = XContentFactory.jsonBuilder().startObject().startObject("type")
+                .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("geohash", true)
+                .field("validate", false).endObject().endObject()
+                .endObject().endObject().string();
+        parser.parse(mapping);
+        assertThat(parser.parse(mapping).mapping().toString(), containsString("\"ignore_malformed\":true"));
+
+        mapping = XContentFactory.jsonBuilder().startObject().startObject("type")
+                .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("geohash", true)
+                .field("validate_lat", false).endObject().endObject()
+                .endObject().endObject().string();
+        parser.parse(mapping);
+        assertThat(parser.parse(mapping).mapping().toString(), containsString("\"ignore_malformed\":true"));
+
+        mapping = XContentFactory.jsonBuilder().startObject().startObject("type")
+                .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("geohash", true)
+                .field("validate_lon", false).endObject().endObject()
+                .endObject().endObject().string();
+        parser.parse(mapping);
+        assertThat(parser.parse(mapping).mapping().toString(), containsString("\"ignore_malformed\":true"));
+
+        // normalize
+        mapping = XContentFactory.jsonBuilder().startObject().startObject("type")
+                .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("geohash", true)
+                .field("normalize", true).endObject().endObject()
+                .endObject().endObject().string();
+        parser.parse(mapping);
+        assertThat(parser.parse(mapping).mapping().toString(), containsString("\"coerce\":true"));
+
+        mapping = XContentFactory.jsonBuilder().startObject().startObject("type")
+                .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("geohash", true)
+                .field("normalize_lat", true).endObject().endObject()
+                .endObject().endObject().string();
+        parser.parse(mapping);
+        assertThat(parser.parse(mapping).mapping().toString(), containsString("\"coerce\":true"));
+
+        mapping = XContentFactory.jsonBuilder().startObject().startObject("type")
+                .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("geohash", true)
+                .field("normalize_lon", true).endObject().endObject()
+                .endObject().endObject().string();
+        parser.parse(mapping);
+        assertThat(parser.parse(mapping).mapping().toString(), containsString("\"coerce\":true"));
+    }
+
     @Test
     public void testGeoPointMapperMerge() throws Exception {
         String stage1Mapping = XContentFactory.jsonBuilder().startObject().startObject("type")
                 .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("geohash", true)
-                .field("validate", true).endObject().endObject()
+                .field("ignore_malformed", true).endObject().endObject()
                 .endObject().endObject().string();
         DocumentMapperParser parser = createIndex("test").mapperService().documentMapperParser();
         DocumentMapper stage1 = parser.parse(stage1Mapping);
         String stage2Mapping = XContentFactory.jsonBuilder().startObject().startObject("type")
-                .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("geohash", true)
-                .field("validate", false).endObject().endObject()
+                .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", false).field("geohash", true)
+                .field("ignore_malformed", false).endObject().endObject()
                 .endObject().endObject().string();
         DocumentMapper stage2 = parser.parse(stage2Mapping);
 
         MergeResult mergeResult = stage1.merge(stage2.mapping(), false, false);
         assertThat(mergeResult.hasConflicts(), equalTo(true));
-        assertThat(mergeResult.buildConflicts().length, equalTo(2));
+        assertThat(mergeResult.buildConflicts().length, equalTo(1));
         // todo better way of checking conflict?
-        assertThat("mapper [point] has different validate_lat", isIn(new ArrayList<>(Arrays.asList(mergeResult.buildConflicts()))));
+        assertThat("mapper [point] has different lat_lon", isIn(new ArrayList<>(Arrays.asList(mergeResult.buildConflicts()))));
 
         // correct mapping and ensure no failures
         stage2Mapping = XContentFactory.jsonBuilder().startObject().startObject("type")
                 .startObject("properties").startObject("point").field("type", "geo_point").field("lat_lon", true).field("geohash", true)
-                .field("validate", true).field("normalize", true).endObject().endObject()
+                .field("ignore_malformed", true).endObject().endObject()
                 .endObject().endObject().string();
         stage2 = parser.parse(stage2Mapping);
         mergeResult = stage1.merge(stage2.mapping(), false, false);
diff --git a/core/src/test/java/org/elasticsearch/index/mapper/geo/GeoPointFieldTypeTests.java b/core/src/test/java/org/elasticsearch/index/mapper/geo/GeoPointFieldTypeTests.java
index 8254e0b8bec..07a769faa61 100644
--- a/core/src/test/java/org/elasticsearch/index/mapper/geo/GeoPointFieldTypeTests.java
+++ b/core/src/test/java/org/elasticsearch/index/mapper/geo/GeoPointFieldTypeTests.java
@@ -31,7 +31,7 @@ public class GeoPointFieldTypeTests extends FieldTypeTestCase {
 
     @Override
     protected int numProperties() {
-        return 6 + super.numProperties();
+        return 4 + super.numProperties();
     }
 
     @Override
@@ -40,11 +40,9 @@ public class GeoPointFieldTypeTests extends FieldTypeTestCase {
         switch (propNum) {
             case 0: gft.setGeohashEnabled(new StringFieldMapper.StringFieldType(), 1, true); break;
             case 1: gft.setLatLonEnabled(new DoubleFieldMapper.DoubleFieldType(), new DoubleFieldMapper.DoubleFieldType()); break;
-            case 2: gft.setValidateLon(!gft.validateLon()); break;
-            case 3: gft.setValidateLat(!gft.validateLat()); break;
-            case 4: gft.setNormalizeLon(!gft.normalizeLon()); break;
-            case 5: gft.setNormalizeLat(!gft.normalizeLat()); break;
-            default: super.modifyProperty(ft, propNum - 6);
+            case 2: gft.setIgnoreMalformed(!gft.ignoreMalformed()); break;
+            case 3: gft.setCoerce(!gft.coerce()); break;
+            default: super.modifyProperty(ft, propNum - 4);
         }
     }
 }
diff --git a/core/src/test/java/org/elasticsearch/index/search/geo/GeoUtilsTests.java b/core/src/test/java/org/elasticsearch/index/search/geo/GeoUtilsTests.java
index cd9783448eb..326b144dc0c 100644
--- a/core/src/test/java/org/elasticsearch/index/search/geo/GeoUtilsTests.java
+++ b/core/src/test/java/org/elasticsearch/index/search/geo/GeoUtilsTests.java
@@ -339,34 +339,28 @@ public class GeoUtilsTests extends ESTestCase {
     @Test
     public void testNormalizePoint_outsideNormalRange_withOptions() {
         for (int i = 0; i < 100; i++) {
-            boolean normLat = randomBoolean();
-            boolean normLon = randomBoolean();
+            boolean normalize = randomBoolean();
             double normalisedLat = (randomDouble() * 180.0) - 90.0;
             double normalisedLon = (randomDouble() * 360.0) - 180.0;
-            int shiftLat = randomIntBetween(1, 10000);
-            int shiftLon = randomIntBetween(1, 10000);
-            double testLat = normalisedLat + (180.0 * shiftLat);
-            double testLon = normalisedLon + (360.0 * shiftLon);
+            int shift = randomIntBetween(1, 10000);
+            double testLat = normalisedLat + (180.0 * shift);
+            double testLon = normalisedLon + (360.0 * shift);
 
             double expectedLat;
             double expectedLon;
-            if (normLat) {
-                expectedLat = normalisedLat * (shiftLat % 2 == 0 ? 1 : -1);
-            } else {
-                expectedLat = testLat;
-            }
-            if (normLon) {
-                expectedLon = normalisedLon + ((normLat && shiftLat % 2 == 1) ? 180 : 0);
+            if (normalize) {
+                expectedLat = normalisedLat * (shift % 2 == 0 ? 1 : -1);
+                expectedLon = normalisedLon + ((shift % 2 == 1) ? 180 : 0);
                 if (expectedLon > 180.0) {
                     expectedLon -= 360;
                 }
             } else {
-                double shiftValue = normalisedLon > 0 ? -180 : 180;
-                expectedLon = testLon + ((normLat && shiftLat % 2 == 1) ? shiftValue : 0);
+                expectedLat = testLat;
+                expectedLon = testLon;
             }
             GeoPoint testPoint = new GeoPoint(testLat, testLon);
             GeoPoint expectedPoint = new GeoPoint(expectedLat, expectedLon);
-            GeoUtils.normalizePoint(testPoint, normLat, normLon);
+            GeoUtils.normalizePoint(testPoint, normalize, normalize);
             assertThat("Unexpected Latitude", testPoint.lat(), closeTo(expectedPoint.lat(), MAX_ACCEPTABLE_ERROR));
             assertThat("Unexpected Longitude", testPoint.lon(), closeTo(expectedPoint.lon(), MAX_ACCEPTABLE_ERROR));
         }
diff --git a/core/src/test/java/org/elasticsearch/search/functionscore/DecayFunctionScoreIT.java b/core/src/test/java/org/elasticsearch/search/functionscore/DecayFunctionScoreIT.java
index 0848c89ac3e..2d2d72822b8 100644
--- a/core/src/test/java/org/elasticsearch/search/functionscore/DecayFunctionScoreIT.java
+++ b/core/src/test/java/org/elasticsearch/search/functionscore/DecayFunctionScoreIT.java
@@ -574,7 +574,8 @@ public class DecayFunctionScoreIT extends ESIntegTestCase {
                 "type",
                 jsonBuilder().startObject().startObject("type").startObject("properties").startObject("test").field("type", "string")
                         .endObject().startObject("date").field("type", "date").endObject().startObject("num").field("type", "double")
-                        .endObject().startObject("geo").field("type", "geo_point").endObject().endObject().endObject().endObject()));
+                        .endObject().startObject("geo").field("type", "geo_point").field("coerce", true).endObject().endObject()
+                        .endObject().endObject()));
         ensureYellow();
         int numDocs = 200;
         List<IndexRequestBuilder> indexBuilders = new ArrayList<>();
diff --git a/core/src/test/java/org/elasticsearch/search/geo/GeoBoundingBoxIT.java b/core/src/test/java/org/elasticsearch/search/geo/GeoBoundingBoxIT.java
index cc8471519db..ef82b3b39df 100644
--- a/core/src/test/java/org/elasticsearch/search/geo/GeoBoundingBoxIT.java
+++ b/core/src/test/java/org/elasticsearch/search/geo/GeoBoundingBoxIT.java
@@ -289,50 +289,50 @@ public class GeoBoundingBoxIT extends ESIntegTestCase {
         SearchResponse searchResponse = client().prepareSearch()
                 .setQuery(
                         filteredQuery(matchAllQuery(),
-                                geoBoundingBoxQuery("location").topLeft(50, -180).bottomRight(-50, 180))
+                                geoBoundingBoxQuery("location").coerce(true).topLeft(50, -180).bottomRight(-50, 180))
                 ).execute().actionGet();
         assertThat(searchResponse.getHits().totalHits(), equalTo(1l));
         searchResponse = client().prepareSearch()
                 .setQuery(
                         filteredQuery(matchAllQuery(),
-                                geoBoundingBoxQuery("location").topLeft(50, -180).bottomRight(-50, 180).type("indexed"))
+                                geoBoundingBoxQuery("location").coerce(true).topLeft(50, -180).bottomRight(-50, 180).type("indexed"))
                 ).execute().actionGet();
         assertThat(searchResponse.getHits().totalHits(), equalTo(1l));
         searchResponse = client().prepareSearch()
                 .setQuery(
                         filteredQuery(matchAllQuery(),
-                                geoBoundingBoxQuery("location").topLeft(90, -180).bottomRight(-90, 180))
+                                geoBoundingBoxQuery("location").coerce(true).topLeft(90, -180).bottomRight(-90, 180))
                 ).execute().actionGet();
         assertThat(searchResponse.getHits().totalHits(), equalTo(2l));
         searchResponse = client().prepareSearch()
                 .setQuery(
                         filteredQuery(matchAllQuery(),
-                                geoBoundingBoxQuery("location").topLeft(90, -180).bottomRight(-90, 180).type("indexed"))
+                                geoBoundingBoxQuery("location").coerce(true).topLeft(90, -180).bottomRight(-90, 180).type("indexed"))
                 ).execute().actionGet();
         assertThat(searchResponse.getHits().totalHits(), equalTo(2l));
 
         searchResponse = client().prepareSearch()
                 .setQuery(
                         filteredQuery(matchAllQuery(),
-                                geoBoundingBoxQuery("location").topLeft(50, 0).bottomRight(-50, 360))
+                                geoBoundingBoxQuery("location").coerce(true).topLeft(50, 0).bottomRight(-50, 360))
                 ).execute().actionGet();
         assertThat(searchResponse.getHits().totalHits(), equalTo(1l));
         searchResponse = client().prepareSearch()
                 .setQuery(
                         filteredQuery(matchAllQuery(),
-                                geoBoundingBoxQuery("location").topLeft(50, 0).bottomRight(-50, 360).type("indexed"))
+                                geoBoundingBoxQuery("location").coerce(true).topLeft(50, 0).bottomRight(-50, 360).type("indexed"))
                 ).execute().actionGet();
         assertThat(searchResponse.getHits().totalHits(), equalTo(1l));
         searchResponse = client().prepareSearch()
                 .setQuery(
                         filteredQuery(matchAllQuery(),
-                                geoBoundingBoxQuery("location").topLeft(90, 0).bottomRight(-90, 360))
+                                geoBoundingBoxQuery("location").coerce(true).topLeft(90, 0).bottomRight(-90, 360))
                 ).execute().actionGet();
         assertThat(searchResponse.getHits().totalHits(), equalTo(2l));
         searchResponse = client().prepareSearch()
                 .setQuery(
                         filteredQuery(matchAllQuery(),
-                                geoBoundingBoxQuery("location").topLeft(90, 0).bottomRight(-90, 360).type("indexed"))
+                                geoBoundingBoxQuery("location").coerce(true).topLeft(90, 0).bottomRight(-90, 360).type("indexed"))
                 ).execute().actionGet();
         assertThat(searchResponse.getHits().totalHits(), equalTo(2l));
     }
diff --git a/core/src/test/java/org/elasticsearch/search/geo/GeoDistanceIT.java b/core/src/test/java/org/elasticsearch/search/geo/GeoDistanceIT.java
index b1e959a5821..69e6ac7df0b 100644
--- a/core/src/test/java/org/elasticsearch/search/geo/GeoDistanceIT.java
+++ b/core/src/test/java/org/elasticsearch/search/geo/GeoDistanceIT.java
@@ -221,8 +221,8 @@ public class GeoDistanceIT extends ESIntegTestCase {
     public void testDistanceSortingMVFields() throws Exception {
         XContentBuilder xContentBuilder = XContentFactory.jsonBuilder().startObject().startObject("type1")
                 .startObject("properties").startObject("locations").field("type", "geo_point").field("lat_lon", true)
-                .startObject("fielddata").field("format", randomNumericFieldDataFormat()).endObject().endObject().endObject()
-                .endObject().endObject();
+                .field("ignore_malformed", true).field("coerce", true).startObject("fielddata")
+                .field("format", randomNumericFieldDataFormat()).endObject().endObject().endObject().endObject().endObject();
         assertAcked(prepareCreate("test")
                 .addMapping("type1", xContentBuilder));
         ensureGreen();
@@ -233,6 +233,11 @@ public class GeoDistanceIT extends ESIntegTestCase {
                 .endObject()).execute().actionGet();
 
         client().prepareIndex("test", "type1", "2").setSource(jsonBuilder().startObject()
+                .field("names", "New York 2")
+                .startObject("locations").field("lat", 400.7143528).field("lon", 285.9990269).endObject()
+                .endObject()).execute().actionGet();
+
+        client().prepareIndex("test", "type1", "3").setSource(jsonBuilder().startObject()
                 .field("names", "Times Square", "Tribeca")
                 .startArray("locations")
                         // to NY: 5.286 km
@@ -242,7 +247,7 @@ public class GeoDistanceIT extends ESIntegTestCase {
                 .endArray()
                 .endObject()).execute().actionGet();
 
-        client().prepareIndex("test", "type1", "3").setSource(jsonBuilder().startObject()
+        client().prepareIndex("test", "type1", "4").setSource(jsonBuilder().startObject()
                 .field("names", "Wall Street", "Soho")
                 .startArray("locations")
                         // to NY: 1.055 km
@@ -253,7 +258,7 @@ public class GeoDistanceIT extends ESIntegTestCase {
                 .endObject()).execute().actionGet();
 
 
-        client().prepareIndex("test", "type1", "4").setSource(jsonBuilder().startObject()
+        client().prepareIndex("test", "type1", "5").setSource(jsonBuilder().startObject()
                 .field("names", "Greenwich Village", "Brooklyn")
                 .startArray("locations")
                         // to NY: 2.029 km
@@ -270,70 +275,76 @@ public class GeoDistanceIT extends ESIntegTestCase {
                 .addSort(SortBuilders.geoDistanceSort("locations").point(40.7143528, -74.0059731).order(SortOrder.ASC))
                 .execute().actionGet();
 
-        assertHitCount(searchResponse, 4);
-        assertOrderedSearchHits(searchResponse, "1", "2", "3", "4");
+        assertHitCount(searchResponse, 5);
+        assertOrderedSearchHits(searchResponse, "1", "2", "3", "4", "5");
         assertThat(((Number) searchResponse.getHits().getAt(0).sortValues()[0]).doubleValue(), closeTo(0d, 10d));
-        assertThat(((Number) searchResponse.getHits().getAt(1).sortValues()[0]).doubleValue(), closeTo(462.1d, 10d));
-        assertThat(((Number) searchResponse.getHits().getAt(2).sortValues()[0]).doubleValue(), closeTo(1055.0d, 10d));
-        assertThat(((Number) searchResponse.getHits().getAt(3).sortValues()[0]).doubleValue(), closeTo(2029.0d, 10d));
+        assertThat(((Number) searchResponse.getHits().getAt(1).sortValues()[0]).doubleValue(), closeTo(421.2d, 10d));
+        assertThat(((Number) searchResponse.getHits().getAt(2).sortValues()[0]).doubleValue(), closeTo(462.1d, 10d));
+        assertThat(((Number) searchResponse.getHits().getAt(3).sortValues()[0]).doubleValue(), closeTo(1055.0d, 10d));
+        assertThat(((Number) searchResponse.getHits().getAt(4).sortValues()[0]).doubleValue(), closeTo(2029.0d, 10d));
 
         // Order: Asc, Mode: max
         searchResponse = client().prepareSearch("test").setQuery(matchAllQuery())
                 .addSort(SortBuilders.geoDistanceSort("locations").point(40.7143528, -74.0059731).order(SortOrder.ASC).sortMode("max"))
                 .execute().actionGet();
 
-        assertHitCount(searchResponse, 4);
-        assertOrderedSearchHits(searchResponse, "1", "3", "2", "4");
+        assertHitCount(searchResponse, 5);
+        assertOrderedSearchHits(searchResponse, "1", "2", "4", "3", "5");
         assertThat(((Number) searchResponse.getHits().getAt(0).sortValues()[0]).doubleValue(), closeTo(0d, 10d));
-        assertThat(((Number) searchResponse.getHits().getAt(1).sortValues()[0]).doubleValue(), closeTo(1258.0d, 10d));
-        assertThat(((Number) searchResponse.getHits().getAt(2).sortValues()[0]).doubleValue(), closeTo(5286.0d, 10d));
-        assertThat(((Number) searchResponse.getHits().getAt(3).sortValues()[0]).doubleValue(), closeTo(8572.0d, 10d));
+        assertThat(((Number) searchResponse.getHits().getAt(1).sortValues()[0]).doubleValue(), closeTo(421.2d, 10d));
+        assertThat(((Number) searchResponse.getHits().getAt(2).sortValues()[0]).doubleValue(), closeTo(1258.0d, 10d));
+        assertThat(((Number) searchResponse.getHits().getAt(3).sortValues()[0]).doubleValue(), closeTo(5286.0d, 10d));
+        assertThat(((Number) searchResponse.getHits().getAt(4).sortValues()[0]).doubleValue(), closeTo(8572.0d, 10d));
 
         // Order: Desc
         searchResponse = client().prepareSearch("test").setQuery(matchAllQuery())
                 .addSort(SortBuilders.geoDistanceSort("locations").point(40.7143528, -74.0059731).order(SortOrder.DESC))
                 .execute().actionGet();
 
-        assertHitCount(searchResponse, 4);
-        assertOrderedSearchHits(searchResponse, "4", "2", "3", "1");
+        assertHitCount(searchResponse, 5);
+        assertOrderedSearchHits(searchResponse, "5", "3", "4", "2", "1");
         assertThat(((Number) searchResponse.getHits().getAt(0).sortValues()[0]).doubleValue(), closeTo(8572.0d, 10d));
         assertThat(((Number) searchResponse.getHits().getAt(1).sortValues()[0]).doubleValue(), closeTo(5286.0d, 10d));
         assertThat(((Number) searchResponse.getHits().getAt(2).sortValues()[0]).doubleValue(), closeTo(1258.0d, 10d));
-        assertThat(((Number) searchResponse.getHits().getAt(3).sortValues()[0]).doubleValue(), closeTo(0d, 10d));
+        assertThat(((Number) searchResponse.getHits().getAt(3).sortValues()[0]).doubleValue(), closeTo(421.2d, 10d));
+        assertThat(((Number) searchResponse.getHits().getAt(4).sortValues()[0]).doubleValue(), closeTo(0d, 10d));
 
         // Order: Desc, Mode: min
         searchResponse = client().prepareSearch("test").setQuery(matchAllQuery())
                 .addSort(SortBuilders.geoDistanceSort("locations").point(40.7143528, -74.0059731).order(SortOrder.DESC).sortMode("min"))
                 .execute().actionGet();
 
-        assertHitCount(searchResponse, 4);
-        assertOrderedSearchHits(searchResponse, "4", "3", "2", "1");
+        assertHitCount(searchResponse, 5);
+        assertOrderedSearchHits(searchResponse, "5", "4", "3", "2", "1");
         assertThat(((Number) searchResponse.getHits().getAt(0).sortValues()[0]).doubleValue(), closeTo(2029.0d, 10d));
         assertThat(((Number) searchResponse.getHits().getAt(1).sortValues()[0]).doubleValue(), closeTo(1055.0d, 10d));
         assertThat(((Number) searchResponse.getHits().getAt(2).sortValues()[0]).doubleValue(), closeTo(462.1d, 10d));
-        assertThat(((Number) searchResponse.getHits().getAt(3).sortValues()[0]).doubleValue(), closeTo(0d, 10d));
+        assertThat(((Number) searchResponse.getHits().getAt(3).sortValues()[0]).doubleValue(), closeTo(421.2d, 10d));
+        assertThat(((Number) searchResponse.getHits().getAt(4).sortValues()[0]).doubleValue(), closeTo(0d, 10d));
 
         searchResponse = client().prepareSearch("test").setQuery(matchAllQuery())
                 .addSort(SortBuilders.geoDistanceSort("locations").point(40.7143528, -74.0059731).sortMode("avg").order(SortOrder.ASC))
                 .execute().actionGet();
 
-        assertHitCount(searchResponse, 4);
-        assertOrderedSearchHits(searchResponse, "1", "3", "2", "4");
+        assertHitCount(searchResponse, 5);
+        assertOrderedSearchHits(searchResponse, "1", "2", "4", "3", "5");
         assertThat(((Number) searchResponse.getHits().getAt(0).sortValues()[0]).doubleValue(), closeTo(0d, 10d));
-        assertThat(((Number) searchResponse.getHits().getAt(1).sortValues()[0]).doubleValue(), closeTo(1157d, 10d));
-        assertThat(((Number) searchResponse.getHits().getAt(2).sortValues()[0]).doubleValue(), closeTo(2874d, 10d));
-        assertThat(((Number) searchResponse.getHits().getAt(3).sortValues()[0]).doubleValue(), closeTo(5301d, 10d));
+        assertThat(((Number) searchResponse.getHits().getAt(1).sortValues()[0]).doubleValue(), closeTo(421.2d, 10d));
+        assertThat(((Number) searchResponse.getHits().getAt(2).sortValues()[0]).doubleValue(), closeTo(1157d, 10d));
+        assertThat(((Number) searchResponse.getHits().getAt(3).sortValues()[0]).doubleValue(), closeTo(2874d, 10d));
+        assertThat(((Number) searchResponse.getHits().getAt(4).sortValues()[0]).doubleValue(), closeTo(5301d, 10d));
 
         searchResponse = client().prepareSearch("test").setQuery(matchAllQuery())
                 .addSort(SortBuilders.geoDistanceSort("locations").point(40.7143528, -74.0059731).sortMode("avg").order(SortOrder.DESC))
                 .execute().actionGet();
 
-        assertHitCount(searchResponse, 4);
-        assertOrderedSearchHits(searchResponse, "4", "2", "3", "1");
+        assertHitCount(searchResponse, 5);
+        assertOrderedSearchHits(searchResponse, "5", "3", "4", "2", "1");
         assertThat(((Number) searchResponse.getHits().getAt(0).sortValues()[0]).doubleValue(), closeTo(5301.0d, 10d));
         assertThat(((Number) searchResponse.getHits().getAt(1).sortValues()[0]).doubleValue(), closeTo(2874.0d, 10d));
         assertThat(((Number) searchResponse.getHits().getAt(2).sortValues()[0]).doubleValue(), closeTo(1157.0d, 10d));
-        assertThat(((Number) searchResponse.getHits().getAt(3).sortValues()[0]).doubleValue(), closeTo(0d, 10d));
+        assertThat(((Number) searchResponse.getHits().getAt(3).sortValues()[0]).doubleValue(), closeTo(421.2d, 10d));
+        assertThat(((Number) searchResponse.getHits().getAt(4).sortValues()[0]).doubleValue(), closeTo(0d, 10d));
 
         assertFailures(client().prepareSearch("test").setQuery(matchAllQuery())
                 .addSort(SortBuilders.geoDistanceSort("locations").point(40.7143528, -74.0059731).sortMode("sum")),
diff --git a/docs/reference/query-dsl/geo-bounding-box-query.asciidoc b/docs/reference/query-dsl/geo-bounding-box-query.asciidoc
index e2d404b69e1..ea14c58e212 100644
--- a/docs/reference/query-dsl/geo-bounding-box-query.asciidoc
+++ b/docs/reference/query-dsl/geo-bounding-box-query.asciidoc
@@ -44,6 +44,25 @@ Then the following simple query can be executed with a
 }
 --------------------------------------------------
 
+[float]
+==== Query Options
+
+[cols="<,<",options="header",]
+|=======================================================================
+|Option |Description
+|`_name` |Optional name field to identify the filter
+
+|`coerce` |Set to `true` to normalize longitude and latitude values to a
+standard -180:180 / -90:90 coordinate system (default is `false`).
+
+|`ignore_malformed` |Set to `true` to
+accept geo points with invalid latitude or longitude (default is `false`).
+
+|`type` |Set to one of `indexed` or `memory` to define whether this filter will
+be executed in memory or indexed. See <> below for further details.
+Default is `memory`.
+|=======================================================================
+
 [float]
 ==== Accepted Formats
 
diff --git a/docs/reference/query-dsl/geo-distance-query.asciidoc b/docs/reference/query-dsl/geo-distance-query.asciidoc
index a0f0d3163c0..130319d951f 100644
--- a/docs/reference/query-dsl/geo-distance-query.asciidoc
+++ b/docs/reference/query-dsl/geo-distance-query.asciidoc
@@ -158,6 +158,19 @@ The following are options allowed on the filter:
     sure the `geo_point` type index lat lon in this case), or `none` which
     disables bounding box optimization.
 
+`_name`::
+
+    Optional name field to identify the query
+
+`coerce`::
+
+    Set to `true` to normalize longitude and latitude values to a standard -180:180 / -90:90
+    coordinate system (default is `false`).
+
+`ignore_malformed`::
+
+    Set to `true` to accept geo points with invalid latitude or
+    longitude (default is `false`).
 
 [float]
 ==== geo_point Type
diff --git a/docs/reference/query-dsl/geo-distance-range-query.asciidoc b/docs/reference/query-dsl/geo-distance-range-query.asciidoc
index 855159bd3a2..cacf0a7a9cb 100644
--- a/docs/reference/query-dsl/geo-distance-range-query.asciidoc
+++ b/docs/reference/query-dsl/geo-distance-range-query.asciidoc
@@ -24,7 +24,7 @@ Filters documents that exists within a range from a specific point:
 }
 --------------------------------------------------
 
-Supports the same point location parameter as the
+Supports the same point location parameter and query options as the
 <>
 filter. And also support the common parameters for range (lt, lte, gt,
 gte, from, to, include_upper and include_lower).
diff --git a/docs/reference/query-dsl/geo-polygon-query.asciidoc b/docs/reference/query-dsl/geo-polygon-query.asciidoc
index 6c28a05b445..d778999c251 100644
--- a/docs/reference/query-dsl/geo-polygon-query.asciidoc
+++ b/docs/reference/query-dsl/geo-polygon-query.asciidoc
@@ -26,6 +26,20 @@ points. Here is an example:
 }
 --------------------------------------------------
 
+[float]
+==== Query Options
+
+[cols="<,<",options="header",]
+|=======================================================================
+|Option |Description
+|`_name` |Optional name field to identify the filter
+
+|`coerce` |Set to `true` to normalize longitude and latitude values to a
+standard -180:180 / -90:90 coordinate system (default is `false`).
+
+|`ignore_malformed` |Set to `true` to accept geo points with invalid latitude or
+longitude (default is `false`).
+
 [float]
 ==== Allowed Formats
 

From 862402222225a4e6f669be41afd8f4d50bc1f14f Mon Sep 17 00:00:00 2001
From: Simon Willnauer 
Date: Mon, 17 Aug 2015 22:50:18 +0200
Subject: [PATCH 03/39] Print es.node.mode if integration tests fail

---
 .../test/junit/listeners/ReproduceInfoPrinter.java            | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/core/src/test/java/org/elasticsearch/test/junit/listeners/ReproduceInfoPrinter.java b/core/src/test/java/org/elasticsearch/test/junit/listeners/ReproduceInfoPrinter.java
index a824ec7def9..07f09cc2386 100644
--- a/core/src/test/java/org/elasticsearch/test/junit/listeners/ReproduceInfoPrinter.java
+++ b/core/src/test/java/org/elasticsearch/test/junit/listeners/ReproduceInfoPrinter.java
@@ -156,8 +156,8 @@ public class ReproduceInfoPrinter extends RunListener {
 
         public ReproduceErrorMessageBuilder appendESProperties() {
             appendProperties("es.logger.level");
-            if (!inVerifyPhase()) {
-                // these properties only make sense for unit tests
+            if (inVerifyPhase()) {
+                // these properties only make sense for integration tests
                 appendProperties("es.node.mode", "es.node.local", TESTS_CLUSTER, InternalTestCluster.TESTS_ENABLE_MOCK_MODULES);
             }
             appendProperties("tests.assertion.disabled", "tests.security.manager", "tests.nightly", "tests.jvms", 

From ee227efc62d43d6f70e09aeaf90200997d02dbec Mon Sep 17 00:00:00 2001
From: Nicholas Knize 
Date: Mon, 17 Aug 2015 16:49:16 -0500
Subject: [PATCH 04/39] move integration test dependency file gzippedmap.gz
 from sources to resources

---
 .../org/elasticsearch/search/geo/gzippedmap.gz      | Bin
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename core/src/test/{java => resources}/org/elasticsearch/search/geo/gzippedmap.gz (100%)

diff --git a/core/src/test/java/org/elasticsearch/search/geo/gzippedmap.gz b/core/src/test/resources/org/elasticsearch/search/geo/gzippedmap.gz
similarity index 100%
rename from core/src/test/java/org/elasticsearch/search/geo/gzippedmap.gz
rename to core/src/test/resources/org/elasticsearch/search/geo/gzippedmap.gz

From 6f124e6eec416e3afb2c93c62ede72e91fe62177 Mon Sep 17 00:00:00 2001
From: Ryan Ernst 
Date: Mon, 17 Aug 2015 15:08:08 -0700
Subject: [PATCH 05/39] Internal: Simplify custom repository type setup

Custom repository types are registered through the RepositoriesModule.
Later, when a specific repository type is used, the RepositoryModule
is installed, which in turn would spawn the module that was
registered for that repository type. However, a module is not needed
here. Each repository type has two associated classes, a Repository and
an IndexShardRepository.

This change makes the registration method for custom repository
types take both of these classes, instead of a module.

See #12783.
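
For illustration, a plugin that used to register a repository module would
now pass its two implementation classes directly. A minimal sketch
(MyRepositoryPlugin and MyRepository are hypothetical names, not part of
this change; MyRepository is assumed to implement
org.elasticsearch.repositories.Repository):

    import org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardRepository;
    import org.elasticsearch.plugins.Plugin;
    import org.elasticsearch.repositories.RepositoriesModule;

    public class MyRepositoryPlugin extends Plugin {
        @Override
        public String name() { return "my-repository"; }

        @Override
        public String description() { return "registers the hypothetical [myrepo] repository type"; }

        // discovered reflectively when RepositoriesModule is created
        public void onModule(RepositoriesModule module) {
            // before this change: module.registerRepository("myrepo", MyRepositoryModule.class);
            module.registerRepository("myrepo", MyRepository.class, BlobStoreIndexShardRepository.class);
        }
    }

As the change itself does for the fs and url repository types, a custom type
can reuse BlobStoreIndexShardRepository as the shard-level implementation.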
---
 .../common/util/ExtensionPoint.java           |  7 +-
 .../repositories/RepositoriesModule.java      | 37 ++++------
 .../repositories/RepositoryModule.java        | 25 +------
 .../repositories/RepositoryTypesRegistry.java | 36 +++++-----
 .../repositories/fs/FsRepositoryModule.java   | 46 ------------
 .../repositories/uri/URLRepositoryModule.java | 46 ------------
 .../AbstractSnapshotIntegTestCase.java        |  3 +-
 .../DedicatedClusterSnapshotRestoreIT.java    | 21 +++---
 .../snapshots/RepositoriesIT.java             |  3 -
 .../SharedClusterSnapshotRestoreIT.java       |  1 -
 .../snapshots/mockstore/MockRepository.java   | 56 +++++++++++++--
 .../mockstore/MockRepositoryModule.java       | 42 -----------
 .../mockstore/MockRepositoryPlugin.java       | 71 -------------------
 .../plugin/cloud/aws/CloudAwsPlugin.java      |  4 +-
 .../repositories/s3/S3RepositoryModule.java   | 45 ------------
 .../cloud/azure/AzureModule.java              |  2 +
 .../plugin/cloud/azure/CloudAzurePlugin.java  | 10 ++-
 .../azure/AzureRepositoryModule.java          | 61 ----------------
 18 files changed, 111 insertions(+), 405 deletions(-)
 delete mode 100644 core/src/main/java/org/elasticsearch/repositories/fs/FsRepositoryModule.java
 delete mode 100644 core/src/main/java/org/elasticsearch/repositories/uri/URLRepositoryModule.java
 delete mode 100644 core/src/test/java/org/elasticsearch/snapshots/mockstore/MockRepositoryModule.java
 delete mode 100644 core/src/test/java/org/elasticsearch/snapshots/mockstore/MockRepositoryPlugin.java
 delete mode 100644 plugins/cloud-aws/src/main/java/org/elasticsearch/repositories/s3/S3RepositoryModule.java
 delete mode 100644 plugins/cloud-azure/src/main/java/org/elasticsearch/repositories/azure/AzureRepositoryModule.java

diff --git a/core/src/main/java/org/elasticsearch/common/util/ExtensionPoint.java b/core/src/main/java/org/elasticsearch/common/util/ExtensionPoint.java
index 5414c4eb7a6..56163378327 100644
--- a/core/src/main/java/org/elasticsearch/common/util/ExtensionPoint.java
+++ b/core/src/main/java/org/elasticsearch/common/util/ExtensionPoint.java
@@ -133,13 +133,16 @@ public abstract class ExtensionPoint {
          * the settings object.
          *
          * @param binder       the binder to use
-         * @param settings     the settings to look up the key to find the implemetation to bind
+         * @param settings     the settings to look up the key to find the implementation to bind
          * @param settingsKey  the key to use with the settings
-         * @param defaultValue the default value if they settings doesn't contain the key
+         * @param defaultValue the default value if the settings do not contain the key, or null if there is no default
          * @return the actual bound type key
          */
         public String bindType(Binder binder, Settings settings, String settingsKey, String defaultValue) {
             final String type = settings.get(settingsKey, defaultValue);
+            if (type == null) {
+                throw new IllegalArgumentException("Missing setting [" + settingsKey + "]");
+            }
             final Class<? extends T> instance = getExtension(type);
             if (instance == null) {
                 throw new IllegalArgumentException("Unknown [" + this.name + "] type [" + type + "]");
diff --git a/core/src/main/java/org/elasticsearch/repositories/RepositoriesModule.java b/core/src/main/java/org/elasticsearch/repositories/RepositoriesModule.java
index bf745905e2a..acad2753f7a 100644
--- a/core/src/main/java/org/elasticsearch/repositories/RepositoriesModule.java
+++ b/core/src/main/java/org/elasticsearch/repositories/RepositoriesModule.java
@@ -19,44 +19,33 @@
 
 package org.elasticsearch.repositories;
 
-import com.google.common.collect.ImmutableMap;
-import com.google.common.collect.Maps;
 import org.elasticsearch.action.admin.cluster.snapshots.status.TransportNodesSnapshotsStatus;
 import org.elasticsearch.common.inject.AbstractModule;
-import org.elasticsearch.common.inject.Module;
+import org.elasticsearch.index.snapshots.IndexShardRepository;
+import org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardRepository;
 import org.elasticsearch.repositories.fs.FsRepository;
-import org.elasticsearch.repositories.fs.FsRepositoryModule;
 import org.elasticsearch.repositories.uri.URLRepository;
-import org.elasticsearch.repositories.uri.URLRepositoryModule;
 import org.elasticsearch.snapshots.RestoreService;
-import org.elasticsearch.snapshots.SnapshotsService;
 import org.elasticsearch.snapshots.SnapshotShardsService;
-
-import java.util.Map;
+import org.elasticsearch.snapshots.SnapshotsService;
 
 /**
- * Module responsible for registering other repositories.
- * <p/>
- * Repositories implemented as plugins should implement {@code onModule(RepositoriesModule module)} method, in which
- * they should register repository using {@link #registerRepository(String, Class)} method.
+ * Sets up classes for Snapshot/Restore.
+ *
+ * Plugins can add custom repository types by calling {@link #registerRepository(String, Class, Class)}.
  */
 public class RepositoriesModule extends AbstractModule {
 
-    private Map<String, Class<? extends Module>> repositoryTypes = Maps.newHashMap();
+    private final RepositoryTypesRegistry repositoryTypes = new RepositoryTypesRegistry();
 
     public RepositoriesModule() {
-        registerRepository(FsRepository.TYPE, FsRepositoryModule.class);
-        registerRepository(URLRepository.TYPE, URLRepositoryModule.class);
+        registerRepository(FsRepository.TYPE, FsRepository.class, BlobStoreIndexShardRepository.class);
+        registerRepository(URLRepository.TYPE, URLRepository.class, BlobStoreIndexShardRepository.class);
     }
 
-    /**
-     * Registers a custom repository type name against a module.
-     *
-     * @param type The type
-     * @param module The module
-     */
-    public void registerRepository(String type, Class<? extends Module> module) {
-        repositoryTypes.put(type, module);
+    /** Registers a custom repository type to the given {@link Repository} and {@link IndexShardRepository}. */
+    public void registerRepository(String type, Class<? extends Repository> repositoryType, Class<? extends IndexShardRepository> shardRepositoryType) {
+        repositoryTypes.registerRepository(type, repositoryType, shardRepositoryType);
     }
 
     @Override
@@ -66,6 +55,6 @@ public class RepositoriesModule extends AbstractModule {
         bind(SnapshotShardsService.class).asEagerSingleton();
         bind(TransportNodesSnapshotsStatus.class).asEagerSingleton();
         bind(RestoreService.class).asEagerSingleton();
-        bind(RepositoryTypesRegistry.class).toInstance(new RepositoryTypesRegistry(ImmutableMap.copyOf(repositoryTypes)));
+        bind(RepositoryTypesRegistry.class).toInstance(repositoryTypes);
     }
 }
diff --git a/core/src/main/java/org/elasticsearch/repositories/RepositoryModule.java b/core/src/main/java/org/elasticsearch/repositories/RepositoryModule.java
index 2dccc2b0186..eca82cc78e8 100644
--- a/core/src/main/java/org/elasticsearch/repositories/RepositoryModule.java
+++ b/core/src/main/java/org/elasticsearch/repositories/RepositoryModule.java
@@ -19,7 +19,6 @@
 
 package org.elasticsearch.repositories;
 
-import com.google.common.collect.ImmutableList;
 import org.elasticsearch.common.inject.AbstractModule;
 import org.elasticsearch.common.inject.Module;
 import org.elasticsearch.common.inject.Modules;
@@ -29,12 +28,10 @@ import org.elasticsearch.common.settings.Settings;
 import java.util.Arrays;
 import java.util.Collections;
 
-import static org.elasticsearch.common.Strings.toCamelCase;
-
 /**
- * This module spawns specific repository module
+ * Binds repository classes for the specific repository type.
  */
-public class RepositoryModule extends AbstractModule implements SpawnModules {
+public class RepositoryModule extends AbstractModule {
 
     private RepositoryName repositoryName;
 
@@ -59,28 +56,12 @@ public class RepositoryModule extends AbstractModule implements SpawnModules {
         this.typesRegistry = typesRegistry;
     }
 
-    /**
-     * Returns repository module.
-     * <p/>
-     * First repository type is looked up in typesRegistry and if it's not found there, this module tries to
-     * load repository by it's class name.
-     *
-     * @return repository module
-     */
-    @Override
-    public Iterable<? extends Module> spawnModules() {
-        Class<? extends Module> repoModuleClass = typesRegistry.type(repositoryName.type());
-        if (repoModuleClass == null) {
-            throw new IllegalArgumentException("Could not find repository type [" + repositoryName.getType() + "] for repository [" + repositoryName.getName() + "]");
-        }
-        return Collections.unmodifiableList(Arrays.asList(Modules.createModule(repoModuleClass, globalSettings)));
-    }
-
     /**
      * {@inheritDoc}
      */
     @Override
     protected void configure() {
+        typesRegistry.bindType(binder(), repositoryName.type());
         bind(RepositorySettings.class).toInstance(new RepositorySettings(globalSettings, settings));
     }
 }
diff --git a/core/src/main/java/org/elasticsearch/repositories/RepositoryTypesRegistry.java b/core/src/main/java/org/elasticsearch/repositories/RepositoryTypesRegistry.java
index 1322b65203d..b1e3041e746 100644
--- a/core/src/main/java/org/elasticsearch/repositories/RepositoryTypesRegistry.java
+++ b/core/src/main/java/org/elasticsearch/repositories/RepositoryTypesRegistry.java
@@ -19,31 +19,29 @@
 
 package org.elasticsearch.repositories;
 
-import com.google.common.collect.ImmutableMap;
-import org.elasticsearch.common.inject.Module;
+import org.elasticsearch.common.inject.Binder;
+import org.elasticsearch.common.settings.Settings;
+import org.elasticsearch.common.util.ExtensionPoint;
+import org.elasticsearch.index.snapshots.IndexShardRepository;
 
 /**
- * Map of registered repository types and associated with these types modules
+ * A mapping from type name to implementations of {@link Repository} and {@link IndexShardRepository}.
 */
 public class RepositoryTypesRegistry {
-    private final ImmutableMap<String, Class<? extends Module>> repositoryTypes;
+    // invariant: repositories and shardRepositories have the same keyset
+    private final ExtensionPoint.TypeExtensionPoint<Repository> repositoryTypes =
+            new ExtensionPoint.TypeExtensionPoint<>("repository", Repository.class);
+    private final ExtensionPoint.TypeExtensionPoint<IndexShardRepository> shardRepositoryTypes =
+            new ExtensionPoint.TypeExtensionPoint<>("index_repository", IndexShardRepository.class);
 
-    /**
-     * Creates new repository with given map of types
-     *
-     * @param repositoryTypes
-     */
-    public RepositoryTypesRegistry(ImmutableMap<String, Class<? extends Module>> repositoryTypes) {
-        this.repositoryTypes = repositoryTypes;
+    public void registerRepository(String name, Class<? extends Repository> repositoryType, Class<? extends IndexShardRepository> shardRepositoryType) {
+        repositoryTypes.registerExtension(name, repositoryType);
+        shardRepositoryTypes.registerExtension(name, shardRepositoryType);
     }
 
-    /**
-     * Returns repository module class for the given type
-     *
-     * @param type repository type
-     * @return repository module class or null if type is not found
-     */
-    public Class<? extends Module> type(String type) {
-        return repositoryTypes.get(type);
+    public void bindType(Binder binder, String type) {
+        Settings settings = Settings.builder().put("type", type).build();
+        repositoryTypes.bindType(binder, settings, "type", null);
+        shardRepositoryTypes.bindType(binder, settings, "type", null);
     }
 }
diff --git a/core/src/main/java/org/elasticsearch/repositories/fs/FsRepositoryModule.java b/core/src/main/java/org/elasticsearch/repositories/fs/FsRepositoryModule.java
deleted file mode 100644
index 2b8a4f4c15a..00000000000
--- a/core/src/main/java/org/elasticsearch/repositories/fs/FsRepositoryModule.java
+++ /dev/null
@@ -1,46 +0,0 @@
-/*
- * Licensed to Elasticsearch under one or more contributor
- * license agreements. See the NOTICE file distributed with
- * this work for additional information regarding copyright
- * ownership. Elasticsearch licenses this file to you under
- * the Apache License, Version 2.0 (the "License"); you may
- * not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied. See the License for the
- * specific language governing permissions and limitations
- * under the License.
- */
-
-package org.elasticsearch.repositories.fs;
-
-import org.elasticsearch.common.inject.AbstractModule;
-import org.elasticsearch.index.snapshots.IndexShardRepository;
-import org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardRepository;
-import org.elasticsearch.repositories.Repository;
-
-/**
- * File system repository module
- */
-public class FsRepositoryModule extends AbstractModule {
-
-    public FsRepositoryModule() {
-        super();
-    }
-
-    /**
-     * {@inheritDoc}
-     */
-    @Override
-    protected void configure() {
-        bind(Repository.class).to(FsRepository.class).asEagerSingleton();
-        bind(IndexShardRepository.class).to(BlobStoreIndexShardRepository.class).asEagerSingleton();
-    }
-
-}
-
diff --git a/core/src/main/java/org/elasticsearch/repositories/uri/URLRepositoryModule.java b/core/src/main/java/org/elasticsearch/repositories/uri/URLRepositoryModule.java
deleted file mode 100644
index 949a1b77f2c..00000000000
--- a/core/src/main/java/org/elasticsearch/repositories/uri/URLRepositoryModule.java
+++ /dev/null
@@ -1,46 +0,0 @@
-/*
- * Licensed to Elasticsearch under one or more contributor
- * license agreements. See the NOTICE file distributed with
- * this work for additional information regarding copyright
- * ownership. Elasticsearch licenses this file to you under
- * the Apache License, Version 2.0 (the "License"); you may
- * not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *    http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied. See the License for the
- * specific language governing permissions and limitations
- * under the License.
- */
-
-package org.elasticsearch.repositories.uri;
-
-import org.elasticsearch.common.inject.AbstractModule;
-import org.elasticsearch.index.snapshots.IndexShardRepository;
-import org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardRepository;
-import org.elasticsearch.repositories.Repository;
-
-/**
- * URL repository module
- */
-public class URLRepositoryModule extends AbstractModule {
-
-    public URLRepositoryModule() {
-        super();
-    }
-
-    /**
-     * {@inheritDoc}
-     */
-    @Override
-    protected void configure() {
-        bind(Repository.class).to(URLRepository.class).asEagerSingleton();
-        bind(IndexShardRepository.class).to(BlobStoreIndexShardRepository.class).asEagerSingleton();
-    }
-
-}
-
diff --git a/core/src/test/java/org/elasticsearch/snapshots/AbstractSnapshotIntegTestCase.java b/core/src/test/java/org/elasticsearch/snapshots/AbstractSnapshotIntegTestCase.java
index 75e910f48dd..2544f14fa3d 100644
--- a/core/src/test/java/org/elasticsearch/snapshots/AbstractSnapshotIntegTestCase.java
+++ b/core/src/test/java/org/elasticsearch/snapshots/AbstractSnapshotIntegTestCase.java
@@ -34,7 +34,6 @@ import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.common.unit.TimeValue;
 import org.elasticsearch.repositories.RepositoriesService;
 import org.elasticsearch.snapshots.mockstore.MockRepository;
-import org.elasticsearch.snapshots.mockstore.MockRepositoryPlugin;
 import org.elasticsearch.test.ESIntegTestCase;
 
 import java.io.IOException;
@@ -57,7 +56,7 @@ public abstract class AbstractSnapshotIntegTestCase extends ESIntegTestCase {
     @Override
     protected Settings nodeSettings(int nodeOrdinal) {
         return settingsBuilder().put(super.nodeSettings(nodeOrdinal))
-                .extendArray("plugin.types", MockRepositoryPlugin.class.getName()).build();
+                .extendArray("plugin.types", MockRepository.Plugin.class.getName()).build();
     }
 
     public static long getFailureCount(String repository) {
diff --git a/core/src/test/java/org/elasticsearch/snapshots/DedicatedClusterSnapshotRestoreIT.java b/core/src/test/java/org/elasticsearch/snapshots/DedicatedClusterSnapshotRestoreIT.java
index fa0ebdec5a0..23ea847e2b1 100644
--- a/core/src/test/java/org/elasticsearch/snapshots/DedicatedClusterSnapshotRestoreIT.java
+++ b/core/src/test/java/org/elasticsearch/snapshots/DedicatedClusterSnapshotRestoreIT.java
@@ -24,7 +24,6 @@ import com.carrotsearch.hppc.IntSet;
 import com.google.common.base.Predicate;
 import com.google.common.collect.ImmutableList;
 import com.google.common.util.concurrent.ListenableFuture;
-
 import org.elasticsearch.ElasticsearchParseException;
 import org.elasticsearch.action.ListenableActionFuture;
 import org.elasticsearch.action.admin.cluster.repositories.put.PutRepositoryResponse;
@@ -41,8 +40,8 @@ import org.elasticsearch.cluster.AbstractDiffable;
 import org.elasticsearch.cluster.ClusterService;
 import org.elasticsearch.cluster.ClusterState;
 import org.elasticsearch.cluster.ProcessedClusterStateUpdateTask;
-import org.elasticsearch.cluster.metadata.MetaData.Custom;
 import org.elasticsearch.cluster.metadata.MetaData;
+import org.elasticsearch.cluster.metadata.MetaData.Custom;
 import org.elasticsearch.cluster.metadata.MetaDataIndexStateService;
 import org.elasticsearch.cluster.routing.allocation.decider.EnableAllocationDecider;
 import org.elasticsearch.common.Nullable;
@@ -64,11 +63,9 @@ import org.elasticsearch.rest.RestRequest;
 import org.elasticsearch.rest.RestResponse;
 import org.elasticsearch.rest.action.admin.cluster.repositories.get.RestGetRepositoriesAction;
 import 
org.elasticsearch.rest.action.admin.cluster.state.RestClusterStateAction; -import org.elasticsearch.snapshots.mockstore.MockRepositoryModule; -import org.elasticsearch.snapshots.mockstore.MockRepositoryPlugin; +import org.elasticsearch.snapshots.mockstore.MockRepository; import org.elasticsearch.test.InternalTestCluster; import org.elasticsearch.test.rest.FakeRestRequest; -import org.junit.Ignore; import org.junit.Test; import java.io.IOException; @@ -88,7 +85,15 @@ import static org.elasticsearch.test.ESIntegTestCase.Scope; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertBlocked; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertThrows; -import static org.hamcrest.Matchers.*; +import static org.hamcrest.Matchers.allOf; +import static org.hamcrest.Matchers.containsString; +import static org.hamcrest.Matchers.equalTo; +import static org.hamcrest.Matchers.greaterThan; +import static org.hamcrest.Matchers.greaterThanOrEqualTo; +import static org.hamcrest.Matchers.lessThan; +import static org.hamcrest.Matchers.not; +import static org.hamcrest.Matchers.notNullValue; +import static org.hamcrest.Matchers.nullValue; /** */ @@ -615,7 +620,7 @@ public class DedicatedClusterSnapshotRestoreIT extends AbstractSnapshotIntegTest @Test public void registrationFailureTest() { logger.info("--> start first node"); - internalCluster().startNode(settingsBuilder().put("plugin.types", MockRepositoryPlugin.class.getName())); + internalCluster().startNode(settingsBuilder().put("plugin.types", MockRepository.Plugin.class.getName())); logger.info("--> start second node"); // Make sure the first node is elected as master internalCluster().startNode(settingsBuilder().put("node.master", false)); @@ -634,7 +639,7 @@ public class DedicatedClusterSnapshotRestoreIT extends AbstractSnapshotIntegTest @Test public void testThatSensitiveRepositorySettingsAreNotExposed() throws Exception { - Settings nodeSettings = settingsBuilder().put("plugin.types", MockRepositoryPlugin.class.getName()).build(); + Settings nodeSettings = settingsBuilder().put("plugin.types", MockRepository.Plugin.class.getName()).build(); logger.info("--> start two nodes"); internalCluster().startNodesAsync(2, nodeSettings).get(); // Register mock repositories diff --git a/core/src/test/java/org/elasticsearch/snapshots/RepositoriesIT.java b/core/src/test/java/org/elasticsearch/snapshots/RepositoriesIT.java index 27a67588df9..c5221d12f3b 100644 --- a/core/src/test/java/org/elasticsearch/snapshots/RepositoriesIT.java +++ b/core/src/test/java/org/elasticsearch/snapshots/RepositoriesIT.java @@ -32,15 +32,12 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.ByteSizeUnit; import org.elasticsearch.repositories.RepositoryException; import org.elasticsearch.repositories.RepositoryVerificationException; -import org.elasticsearch.snapshots.mockstore.MockRepositoryModule; -import org.elasticsearch.snapshots.mockstore.MockRepositoryPlugin; import org.elasticsearch.test.ESIntegTestCase; import org.junit.Test; import java.nio.file.Path; import java.util.List; -import static org.elasticsearch.common.settings.Settings.settingsBuilder; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked; import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertThrows; import static org.hamcrest.Matchers.containsString; diff --git 
a/core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java b/core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java index df7eb4e3b9b..d89eea7206a 100644 --- a/core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java +++ b/core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java @@ -64,7 +64,6 @@ import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.index.store.IndexStore; import org.elasticsearch.indices.InvalidIndexNameException; import org.elasticsearch.repositories.RepositoriesService; -import org.elasticsearch.snapshots.mockstore.MockRepositoryModule; import org.elasticsearch.test.junit.annotations.TestLogging; import org.junit.Test; diff --git a/core/src/test/java/org/elasticsearch/snapshots/mockstore/MockRepository.java b/core/src/test/java/org/elasticsearch/snapshots/mockstore/MockRepository.java index 12e51475c9d..c346e5817bc 100644 --- a/core/src/test/java/org/elasticsearch/snapshots/mockstore/MockRepository.java +++ b/core/src/test/java/org/elasticsearch/snapshots/mockstore/MockRepository.java @@ -19,10 +19,8 @@ package org.elasticsearch.snapshots.mockstore; -import com.google.common.collect.ImmutableList; -import com.google.common.collect.ImmutableMap; - import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.Version; import org.elasticsearch.cluster.ClusterService; import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.cluster.metadata.SnapshotId; @@ -30,11 +28,17 @@ import org.elasticsearch.common.blobstore.BlobContainer; import org.elasticsearch.common.blobstore.BlobMetaData; import org.elasticsearch.common.blobstore.BlobPath; import org.elasticsearch.common.blobstore.BlobStore; +import org.elasticsearch.common.inject.AbstractModule; import org.elasticsearch.common.inject.Inject; +import org.elasticsearch.common.inject.Module; import org.elasticsearch.common.io.PathUtils; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.settings.SettingsFilter; import org.elasticsearch.env.Environment; import org.elasticsearch.index.snapshots.IndexShardRepository; +import org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardRepository; +import org.elasticsearch.plugins.AbstractPlugin; +import org.elasticsearch.repositories.RepositoriesModule; import org.elasticsearch.repositories.RepositoryName; import org.elasticsearch.repositories.RepositorySettings; import org.elasticsearch.repositories.fs.FsRepository; @@ -46,6 +50,10 @@ import java.io.UnsupportedEncodingException; import java.nio.file.Path; import java.security.MessageDigest; import java.security.NoSuchAlgorithmException; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collection; +import java.util.Collections; import java.util.List; import java.util.Map; import java.util.concurrent.ConcurrentHashMap; @@ -54,10 +62,48 @@ import java.util.concurrent.atomic.AtomicLong; import static org.elasticsearch.common.settings.Settings.settingsBuilder; -/** - */ public class MockRepository extends FsRepository { + public static class Plugin extends AbstractPlugin { + + @Override + public String name() { + return "mock-repository"; + } + + @Override + public String description() { + return "Mock Repository"; + } + + public void onModule(RepositoriesModule repositoriesModule) { + repositoriesModule.registerRepository("mock", MockRepository.class, BlobStoreIndexShardRepository.class); + } + + @Override + public Collection> 
modules() { + Collection> modules = new ArrayList<>(); + modules.add(SettingsFilteringModule.class); + return modules; + } + + public static class SettingsFilteringModule extends AbstractModule { + + @Override + protected void configure() { + bind(SettingsFilteringService.class).asEagerSingleton(); + } + } + + public static class SettingsFilteringService { + @Inject + public SettingsFilteringService(SettingsFilter settingsFilter) { + settingsFilter.addFilter("secret.mock.password"); + } + } + + } + private final AtomicLong failureCounter = new AtomicLong(); public long getFailureCount() { diff --git a/core/src/test/java/org/elasticsearch/snapshots/mockstore/MockRepositoryModule.java b/core/src/test/java/org/elasticsearch/snapshots/mockstore/MockRepositoryModule.java deleted file mode 100644 index 0da50f15d61..00000000000 --- a/core/src/test/java/org/elasticsearch/snapshots/mockstore/MockRepositoryModule.java +++ /dev/null @@ -1,42 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.snapshots.mockstore; - -import org.elasticsearch.common.inject.AbstractModule; -import org.elasticsearch.index.snapshots.IndexShardRepository; -import org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardRepository; -import org.elasticsearch.repositories.Repository; - -/** - */ -public class MockRepositoryModule extends AbstractModule { - - public MockRepositoryModule() { - super(); - } - - @Override - protected void configure() { - bind(Repository.class).to(MockRepository.class).asEagerSingleton(); - bind(IndexShardRepository.class).to(BlobStoreIndexShardRepository.class).asEagerSingleton(); - } - -} - diff --git a/core/src/test/java/org/elasticsearch/snapshots/mockstore/MockRepositoryPlugin.java b/core/src/test/java/org/elasticsearch/snapshots/mockstore/MockRepositoryPlugin.java deleted file mode 100644 index a09c8601f7f..00000000000 --- a/core/src/test/java/org/elasticsearch/snapshots/mockstore/MockRepositoryPlugin.java +++ /dev/null @@ -1,71 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. 
See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.snapshots.mockstore; - -import org.elasticsearch.common.inject.AbstractModule; -import org.elasticsearch.common.inject.Inject; -import org.elasticsearch.common.inject.Module; -import org.elasticsearch.common.settings.SettingsFilter; -import org.elasticsearch.plugins.AbstractPlugin; -import org.elasticsearch.repositories.RepositoriesModule; - -import java.util.Collection; - -import static com.google.common.collect.Lists.newArrayList; - -public class MockRepositoryPlugin extends AbstractPlugin { - - @Override - public String name() { - return "mock-repository"; - } - - @Override - public String description() { - return "Mock Repository"; - } - - public void onModule(RepositoriesModule repositoriesModule) { - repositoriesModule.registerRepository("mock", MockRepositoryModule.class); - } - - @Override - public Collection> modules() { - Collection> modules = newArrayList(); - modules.add(SettingsFilteringModule.class); - return modules; - } - - public static class SettingsFilteringModule extends AbstractModule { - - @Override - protected void configure() { - bind(SettingsFilteringService.class).asEagerSingleton(); - } - } - - public static class SettingsFilteringService { - @Inject - public SettingsFilteringService(SettingsFilter settingsFilter) { - settingsFilter.addFilter("secret.mock.password"); - } - } - -} diff --git a/plugins/cloud-aws/src/main/java/org/elasticsearch/plugin/cloud/aws/CloudAwsPlugin.java b/plugins/cloud-aws/src/main/java/org/elasticsearch/plugin/cloud/aws/CloudAwsPlugin.java index 4b16400897f..55a0ae51fb1 100644 --- a/plugins/cloud-aws/src/main/java/org/elasticsearch/plugin/cloud/aws/CloudAwsPlugin.java +++ b/plugins/cloud-aws/src/main/java/org/elasticsearch/plugin/cloud/aws/CloudAwsPlugin.java @@ -26,10 +26,10 @@ import org.elasticsearch.common.inject.Module; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.discovery.DiscoveryModule; import org.elasticsearch.discovery.ec2.Ec2Discovery; +import org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardRepository; import org.elasticsearch.plugins.AbstractPlugin; import org.elasticsearch.repositories.RepositoriesModule; import org.elasticsearch.repositories.s3.S3Repository; -import org.elasticsearch.repositories.s3.S3RepositoryModule; import java.util.ArrayList; import java.util.Collection; @@ -76,7 +76,7 @@ public class CloudAwsPlugin extends AbstractPlugin { public void onModule(RepositoriesModule repositoriesModule) { if (settings.getAsBoolean("cloud.enabled", true)) { - repositoriesModule.registerRepository(S3Repository.TYPE, S3RepositoryModule.class); + repositoriesModule.registerRepository(S3Repository.TYPE, S3Repository.class, BlobStoreIndexShardRepository.class); } } diff --git a/plugins/cloud-aws/src/main/java/org/elasticsearch/repositories/s3/S3RepositoryModule.java b/plugins/cloud-aws/src/main/java/org/elasticsearch/repositories/s3/S3RepositoryModule.java deleted file mode 100644 index 6d51d61fde2..00000000000 --- a/plugins/cloud-aws/src/main/java/org/elasticsearch/repositories/s3/S3RepositoryModule.java +++ /dev/null @@ -1,45 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. 
Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.repositories.s3; - -import org.elasticsearch.common.inject.AbstractModule; -import org.elasticsearch.index.snapshots.IndexShardRepository; -import org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardRepository; -import org.elasticsearch.repositories.Repository; - -/** - * S3 repository module - */ -public class S3RepositoryModule extends AbstractModule { - - public S3RepositoryModule() { - super(); - } - - /** - * {@inheritDoc} - */ - @Override - protected void configure() { - bind(Repository.class).to(S3Repository.class).asEagerSingleton(); - bind(IndexShardRepository.class).to(BlobStoreIndexShardRepository.class).asEagerSingleton(); - } -} - diff --git a/plugins/cloud-azure/src/main/java/org/elasticsearch/cloud/azure/AzureModule.java b/plugins/cloud-azure/src/main/java/org/elasticsearch/cloud/azure/AzureModule.java index eec355af760..29d260ff97a 100644 --- a/plugins/cloud-azure/src/main/java/org/elasticsearch/cloud/azure/AzureModule.java +++ b/plugins/cloud-azure/src/main/java/org/elasticsearch/cloud/azure/AzureModule.java @@ -27,6 +27,7 @@ import org.elasticsearch.cloud.azure.management.AzureComputeSettingsFilter; import org.elasticsearch.cloud.azure.storage.AzureStorageService; import org.elasticsearch.cloud.azure.storage.AzureStorageService.Storage; import org.elasticsearch.cloud.azure.storage.AzureStorageServiceImpl; +import org.elasticsearch.cloud.azure.storage.AzureStorageSettingsFilter; import org.elasticsearch.common.Strings; import org.elasticsearch.common.inject.AbstractModule; import org.elasticsearch.common.inject.Inject; @@ -73,6 +74,7 @@ public class AzureModule extends AbstractModule { @Override protected void configure() { logger.debug("starting azure services"); + bind(AzureStorageSettingsFilter.class).asEagerSingleton(); bind(AzureComputeSettingsFilter.class).asEagerSingleton(); // If we have set discovery to azure, let's start the azure compute service diff --git a/plugins/cloud-azure/src/main/java/org/elasticsearch/plugin/cloud/azure/CloudAzurePlugin.java b/plugins/cloud-azure/src/main/java/org/elasticsearch/plugin/cloud/azure/CloudAzurePlugin.java index e3b5b455584..f60f05da8c3 100644 --- a/plugins/cloud-azure/src/main/java/org/elasticsearch/plugin/cloud/azure/CloudAzurePlugin.java +++ b/plugins/cloud-azure/src/main/java/org/elasticsearch/plugin/cloud/azure/CloudAzurePlugin.java @@ -26,13 +26,13 @@ import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.discovery.DiscoveryModule; import org.elasticsearch.discovery.azure.AzureDiscovery; +import org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardRepository; import org.elasticsearch.index.store.IndexStoreModule; import org.elasticsearch.index.store.smbmmapfs.SmbMmapFsIndexStore; import org.elasticsearch.index.store.smbsimplefs.SmbSimpleFsIndexStore; import org.elasticsearch.plugins.AbstractPlugin; import 
org.elasticsearch.repositories.RepositoriesModule; import org.elasticsearch.repositories.azure.AzureRepository; -import org.elasticsearch.repositories.azure.AzureRepositoryModule; import java.util.ArrayList; import java.util.Collection; @@ -71,11 +71,9 @@ public class CloudAzurePlugin extends AbstractPlugin { return modules; } - @Override - public void processModule(Module module) { - if (isSnapshotReady(settings, logger) - && module instanceof RepositoriesModule) { - ((RepositoriesModule)module).registerRepository(AzureRepository.TYPE, AzureRepositoryModule.class); + public void onModule(RepositoriesModule module) { + if (isSnapshotReady(settings, logger)) { + module.registerRepository(AzureRepository.TYPE, AzureRepository.class, BlobStoreIndexShardRepository.class); } } diff --git a/plugins/cloud-azure/src/main/java/org/elasticsearch/repositories/azure/AzureRepositoryModule.java b/plugins/cloud-azure/src/main/java/org/elasticsearch/repositories/azure/AzureRepositoryModule.java deleted file mode 100644 index 11d420b1b07..00000000000 --- a/plugins/cloud-azure/src/main/java/org/elasticsearch/repositories/azure/AzureRepositoryModule.java +++ /dev/null @@ -1,61 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.repositories.azure; - -import org.elasticsearch.cloud.azure.AzureModule; -import org.elasticsearch.cloud.azure.storage.AzureStorageSettingsFilter; -import org.elasticsearch.common.inject.AbstractModule; -import org.elasticsearch.common.logging.ESLogger; -import org.elasticsearch.common.logging.Loggers; -import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.index.snapshots.IndexShardRepository; -import org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardRepository; -import org.elasticsearch.repositories.Repository; - -/** - * Azure repository module - */ -public class AzureRepositoryModule extends AbstractModule { - - protected final ESLogger logger; - private Settings settings; - - public AzureRepositoryModule(Settings settings) { - super(); - this.logger = Loggers.getLogger(getClass(), settings); - this.settings = settings; - } - - /** - * {@inheritDoc} - */ - @Override - protected void configure() { - bind(AzureStorageSettingsFilter.class).asEagerSingleton(); - if (AzureModule.isSnapshotReady(settings, logger)) { - bind(Repository.class).to(AzureRepository.class).asEagerSingleton(); - bind(IndexShardRepository.class).to(BlobStoreIndexShardRepository.class).asEagerSingleton(); - } else { - logger.debug("disabling azure snapshot and restore features"); - } - } - -} - From 4114a8359decd03af9f4365a0ab218f2971b17c3 Mon Sep 17 00:00:00 2001 From: Igor Motov Date: Mon, 17 Aug 2015 21:30:13 -0400 Subject: [PATCH 06/39] Improve stability of Snapshot/Restore test The restore portion of some snapshot/restore test is failing randomly due to #9421. This change suspends rebalance during snapshot/restore operations until #9421 is fixed. Closes #12855 --- .../snapshots/AbstractSnapshotIntegTestCase.java | 6 +++++- .../snapshots/SharedClusterSnapshotRestoreIT.java | 6 ++---- 2 files changed, 7 insertions(+), 5 deletions(-) diff --git a/core/src/test/java/org/elasticsearch/snapshots/AbstractSnapshotIntegTestCase.java b/core/src/test/java/org/elasticsearch/snapshots/AbstractSnapshotIntegTestCase.java index 75e910f48dd..a9e2bc57b5f 100644 --- a/core/src/test/java/org/elasticsearch/snapshots/AbstractSnapshotIntegTestCase.java +++ b/core/src/test/java/org/elasticsearch/snapshots/AbstractSnapshotIntegTestCase.java @@ -28,6 +28,7 @@ import org.elasticsearch.cluster.ClusterStateListener; import org.elasticsearch.cluster.ClusterStateUpdateTask; import org.elasticsearch.cluster.SnapshotsInProgress; import org.elasticsearch.cluster.metadata.SnapshotId; +import org.elasticsearch.cluster.routing.allocation.decider.EnableAllocationDecider; import org.elasticsearch.cluster.service.PendingClusterTask; import org.elasticsearch.common.Priority; import org.elasticsearch.common.settings.Settings; @@ -57,7 +58,10 @@ public abstract class AbstractSnapshotIntegTestCase extends ESIntegTestCase { @Override protected Settings nodeSettings(int nodeOrdinal) { return settingsBuilder().put(super.nodeSettings(nodeOrdinal)) - .extendArray("plugin.types", MockRepositoryPlugin.class.getName()).build(); + // Rebalancing is causing some checks after restore to randomly fail + // due to https://github.com/elastic/elasticsearch/issues/9421 + .put(EnableAllocationDecider.CLUSTER_ROUTING_REBALANCE_ENABLE, EnableAllocationDecider.Rebalance.NONE) + .extendArray("plugin.types", MockRepositoryPlugin.class.getName()).build(); } public static long getFailureCount(String repository) { diff --git a/core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java 
b/core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java index df7eb4e3b9b..7e282305c89 100644 --- a/core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java +++ b/core/src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreIT.java @@ -64,7 +64,6 @@ import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.index.store.IndexStore; import org.elasticsearch.indices.InvalidIndexNameException; import org.elasticsearch.repositories.RepositoriesService; -import org.elasticsearch.snapshots.mockstore.MockRepositoryModule; import org.elasticsearch.test.junit.annotations.TestLogging; import org.junit.Test; @@ -308,7 +307,7 @@ public class SharedClusterSnapshotRestoreIT extends AbstractSnapshotIntegTestCas logger.info("--> create index with foo type"); assertAcked(prepareCreate("test-idx", 2, Settings.builder() - .put(indexSettings()).put(SETTING_NUMBER_OF_REPLICAS, between(0, 1)).put("refresh_interval", 10, TimeUnit.SECONDS))); + .put(indexSettings()).put(SETTING_NUMBER_OF_REPLICAS, between(0, 1)).put("refresh_interval", 10, TimeUnit.SECONDS))); NumShards numShards = getNumShards("test-idx"); @@ -323,7 +322,7 @@ public class SharedClusterSnapshotRestoreIT extends AbstractSnapshotIntegTestCas logger.info("--> delete the index and recreate it with bar type"); cluster().wipeIndices("test-idx"); assertAcked(prepareCreate("test-idx", 2, Settings.builder() - .put(SETTING_NUMBER_OF_SHARDS, numShards.numPrimaries).put(SETTING_NUMBER_OF_REPLICAS, between(0, 1)).put("refresh_interval", 5, TimeUnit.SECONDS))); + .put(SETTING_NUMBER_OF_SHARDS, numShards.numPrimaries).put(SETTING_NUMBER_OF_REPLICAS, between(0, 1)).put("refresh_interval", 5, TimeUnit.SECONDS))); assertAcked(client().admin().indices().preparePutMapping("test-idx").setType("bar").setSource("baz", "type=string")); ensureGreen(); @@ -996,7 +995,6 @@ public class SharedClusterSnapshotRestoreIT extends AbstractSnapshotIntegTestCas } @Test - @AwaitsFix(bugUrl = "https://github.com/elastic/elasticsearch/issues/12855") public void renameOnRestoreTest() throws Exception { Client client = client(); From 3ca12889e58909e96fa70120d867d181a7d78a53 Mon Sep 17 00:00:00 2001 From: Robert Muir Date: Mon, 17 Aug 2015 22:52:22 -0400 Subject: [PATCH 07/39] Use preferIPv6Addresses for sort order, not preferIPv4Stack java.net.preferIPv6Addresses is a better choice. preferIPv4Stack is a nuclear option and you just won't even bind to any IPv6 addresses. This reduces confusion. --- .../org/elasticsearch/common/network/NetworkUtils.java | 8 ++++---- .../elasticsearch/common/network/NetworkUtilsTests.java | 4 ++-- 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/core/src/main/java/org/elasticsearch/common/network/NetworkUtils.java b/core/src/main/java/org/elasticsearch/common/network/NetworkUtils.java index 14c8d3d794e..81bf63dae4f 100644 --- a/core/src/main/java/org/elasticsearch/common/network/NetworkUtils.java +++ b/core/src/main/java/org/elasticsearch/common/network/NetworkUtils.java @@ -51,12 +51,12 @@ public abstract class NetworkUtils { * @deprecated transition mechanism only */ @Deprecated - static final boolean PREFER_V4 = Boolean.parseBoolean(System.getProperty("java.net.preferIPv4Stack", "true")); + static final boolean PREFER_V6 = Boolean.parseBoolean(System.getProperty("java.net.preferIPv6Addresses", "false")); /** Sorts an address by preference. 
This way code like publishing can just pick the first one */ - static int sortKey(InetAddress address, boolean prefer_v4) { + static int sortKey(InetAddress address, boolean prefer_v6) { int key = address.getAddress().length; - if (prefer_v4 == false) { + if (prefer_v6) { key = -key; } @@ -88,7 +88,7 @@ public abstract class NetworkUtils { Collections.sort(list, new Comparator() { @Override public int compare(InetAddress left, InetAddress right) { - int cmp = Integer.compare(sortKey(left, PREFER_V4), sortKey(right, PREFER_V4)); + int cmp = Integer.compare(sortKey(left, PREFER_V6), sortKey(right, PREFER_V6)); if (cmp == 0) { cmp = new BytesRef(left.getAddress()).compareTo(new BytesRef(right.getAddress())); } diff --git a/core/src/test/java/org/elasticsearch/common/network/NetworkUtilsTests.java b/core/src/test/java/org/elasticsearch/common/network/NetworkUtilsTests.java index fdcaef3e193..e5b95f258a3 100644 --- a/core/src/test/java/org/elasticsearch/common/network/NetworkUtilsTests.java +++ b/core/src/test/java/org/elasticsearch/common/network/NetworkUtilsTests.java @@ -34,8 +34,8 @@ public class NetworkUtilsTests extends ESTestCase { public void testSortKey() throws Exception { InetAddress localhostv4 = InetAddress.getByName("127.0.0.1"); InetAddress localhostv6 = InetAddress.getByName("::1"); - assertTrue(NetworkUtils.sortKey(localhostv4, true) < NetworkUtils.sortKey(localhostv6, true)); - assertTrue(NetworkUtils.sortKey(localhostv6, false) < NetworkUtils.sortKey(localhostv4, false)); + assertTrue(NetworkUtils.sortKey(localhostv4, false) < NetworkUtils.sortKey(localhostv6, false)); + assertTrue(NetworkUtils.sortKey(localhostv6, true) < NetworkUtils.sortKey(localhostv4, true)); } /** From 2b97f5d9ebf1b07fcf61be5cdb1c4bc1250c537d Mon Sep 17 00:00:00 2001 From: Martijn van Groningen Date: Fri, 14 Aug 2015 17:34:57 +0200 Subject: [PATCH 08/39] Pass down the EngineConfig to IndexSearcherWrapper If a new IndexSearcher gets created the engine config can be used to get the query cache and query cache policy from. 
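
For illustration, a minimal wrapper against the new two-argument signature
could look like the sketch below. This is not part of the patch: the class
name PassThroughWrapper is hypothetical, and it assumes EngineConfig exposes
getQueryCache() and getQueryCachingPolicy() getters, as the description above
implies.

    package org.elasticsearch.index.engine;

    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.search.IndexSearcher;

    public class PassThroughWrapper implements IndexSearcherWrapper {

        @Override
        public DirectoryReader wrap(DirectoryReader reader) {
            // No reader-level wrapping in this sketch.
            return reader;
        }

        @Override
        public IndexSearcher wrap(EngineConfig engineConfig, IndexSearcher searcher) throws EngineException {
            // Build a new searcher over the same reader, reusing the engine's
            // query cache and caching policy so caching behaves as before.
            IndexSearcher wrapped = new IndexSearcher(searcher.getIndexReader());
            wrapped.setQueryCache(engineConfig.getQueryCache());                 // assumed getter
            wrapped.setQueryCachingPolicy(engineConfig.getQueryCachingPolicy()); // assumed getter
            return wrapped;
        }
    }
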
--- .../elasticsearch/index/engine/IndexSearcherWrapper.java | 6 ++++-- .../index/engine/IndexSearcherWrappingService.java | 2 +- .../org/elasticsearch/index/engine/InternalEngineTests.java | 2 +- 3 files changed, 6 insertions(+), 4 deletions(-) diff --git a/core/src/main/java/org/elasticsearch/index/engine/IndexSearcherWrapper.java b/core/src/main/java/org/elasticsearch/index/engine/IndexSearcherWrapper.java index c8a75f447b7..665d17a2f86 100644 --- a/core/src/main/java/org/elasticsearch/index/engine/IndexSearcherWrapper.java +++ b/core/src/main/java/org/elasticsearch/index/engine/IndexSearcherWrapper.java @@ -36,10 +36,12 @@ public interface IndexSearcherWrapper { DirectoryReader wrap(DirectoryReader reader); /** - * @param searcher The provided index searcher to be wrapped to add custom functionality + * @param engineConfig The engine config which can be used to get the query cache and query cache policy from + * when creating a new index searcher + * @param searcher The provided index searcher to be wrapped to add custom functionality * @return a new index searcher wrapping the provided index searcher or if no wrapping was performed * the provided index searcher */ - IndexSearcher wrap(IndexSearcher searcher) throws EngineException; + IndexSearcher wrap(EngineConfig engineConfig, IndexSearcher searcher) throws EngineException; } diff --git a/core/src/main/java/org/elasticsearch/index/engine/IndexSearcherWrappingService.java b/core/src/main/java/org/elasticsearch/index/engine/IndexSearcherWrappingService.java index a0ea90e024e..23d05f01dc7 100644 --- a/core/src/main/java/org/elasticsearch/index/engine/IndexSearcherWrappingService.java +++ b/core/src/main/java/org/elasticsearch/index/engine/IndexSearcherWrappingService.java @@ -77,7 +77,7 @@ public final class IndexSearcherWrappingService { // TODO: Right now IndexSearcher isn't wrapper friendly, when it becomes wrapper friendly we should revise this extension point // For example if IndexSearcher#rewrite() is overwritten than also IndexSearcher#createNormalizedWeight needs to be overwritten // This needs to be fixed before we can allow the IndexSearcher from Engine to be wrapped multiple times - IndexSearcher indexSearcher = wrapper.wrap(innerIndexSearcher); + IndexSearcher indexSearcher = wrapper.wrap(engineConfig, innerIndexSearcher); if (reader == engineSearcher.reader() && indexSearcher == innerIndexSearcher) { return engineSearcher; } else { diff --git a/core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java b/core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java index 974a4a21557..8edc47410a3 100644 --- a/core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java +++ b/core/src/test/java/org/elasticsearch/index/engine/InternalEngineTests.java @@ -535,7 +535,7 @@ public class InternalEngineTests extends ESTestCase { } @Override - public IndexSearcher wrap(IndexSearcher searcher) throws EngineException { + public IndexSearcher wrap(EngineConfig engineConfig, IndexSearcher searcher) throws EngineException { counter.incrementAndGet(); return searcher; } From a91b3fcbb9e493c3aac9b46723c8dcad2c23a3ae Mon Sep 17 00:00:00 2001 From: Adrien Grand Date: Mon, 17 Aug 2015 12:47:14 +0200 Subject: [PATCH 09/39] Move the `murmur3` field to a plugin and fix defaults. 
This move the `murmur3` field to the `mapper-murmur3` plugin and fixes its defaults so that values will not be indexed by default, as the only purpose of this field is to speed up `cardinality` aggregations on high-cardinality string fields, which only requires doc values. I also removed the `rehash` option from the `cardinality` aggregation as it doesn't bring much value (rehashing is cheap) and allowed to remove the coupling between the `cardinality` aggregation and the `murmur3` field. Close #12874 --- .../index/mapper/DocumentMapperParser.java | 3 +- .../index/mapper/MapperBuilders.java | 4 - .../elasticsearch/plugins/PluginManager.java | 1 + .../cardinality/CardinalityAggregator.java | 11 +- .../CardinalityAggregatorFactory.java | 12 +- .../cardinality/CardinalityParser.java | 20 +--- .../elasticsearch/plugins/plugin-install.help | 1 + .../org/elasticsearch/get/GetActionIT.java | 13 +-- .../plugins/PluginManagerIT.java | 1 + .../aggregations/metrics/CardinalityIT.java | 108 +++--------------- docs/plugins/mapper-murmur3.asciidoc | 101 ++++++++++++++++ docs/plugins/mapper.asciidoc | 7 +- .../metrics/cardinality-aggregation.asciidoc | 58 ++-------- docs/reference/mapping/types.asciidoc | 1 + .../migration/migrate_2_0/removals.asciidoc | 10 ++ plugins/mapper-murmur3/licenses/no_deps.txt | 1 + plugins/mapper-murmur3/pom.xml | 43 +++++++ .../test/mapper_murmur3/10_basic.yaml | 65 +++++++++++ .../mapper/murmur3}/Murmur3FieldMapper.java | 29 ++++- .../murmur3/RegisterMurmur3FieldMapper.java | 36 ++++++ .../mapper/MapperMurmur3IndexModule.java | 31 +++++ .../plugin/mapper/MapperMurmur3Plugin.java | 45 ++++++++ .../mapper/murmur3/MapperMurmur3RestIT.java | 42 +++++++ .../murmur3}/Murmur3FieldMapperTests.java | 25 +++- plugins/pom.xml | 1 + qa/smoke-test-plugins/pom.xml | 8 ++ 26 files changed, 477 insertions(+), 200 deletions(-) create mode 100644 docs/plugins/mapper-murmur3.asciidoc create mode 100644 plugins/mapper-murmur3/licenses/no_deps.txt create mode 100644 plugins/mapper-murmur3/pom.xml create mode 100644 plugins/mapper-murmur3/rest-api-spec/test/mapper_murmur3/10_basic.yaml rename {core/src/main/java/org/elasticsearch/index/mapper/core => plugins/mapper-murmur3/src/main/java/org/elasticsearch/index/mapper/murmur3}/Murmur3FieldMapper.java (85%) create mode 100644 plugins/mapper-murmur3/src/main/java/org/elasticsearch/index/mapper/murmur3/RegisterMurmur3FieldMapper.java create mode 100644 plugins/mapper-murmur3/src/main/java/org/elasticsearch/plugin/mapper/MapperMurmur3IndexModule.java create mode 100644 plugins/mapper-murmur3/src/main/java/org/elasticsearch/plugin/mapper/MapperMurmur3Plugin.java create mode 100644 plugins/mapper-murmur3/src/test/java/org/elasticsearch/index/mapper/murmur3/MapperMurmur3RestIT.java rename {core/src/test/java/org/elasticsearch/index/mapper/core => plugins/mapper-murmur3/src/test/java/org/elasticsearch/index/mapper/murmur3}/Murmur3FieldMapperTests.java (80%) diff --git a/core/src/main/java/org/elasticsearch/index/mapper/DocumentMapperParser.java b/core/src/main/java/org/elasticsearch/index/mapper/DocumentMapperParser.java index 5c6a10635ab..33281dc86f4 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/DocumentMapperParser.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/DocumentMapperParser.java @@ -101,8 +101,7 @@ public class DocumentMapperParser { .put(ObjectMapper.NESTED_CONTENT_TYPE, new ObjectMapper.TypeParser()) .put(TypeParsers.MULTI_FIELD_CONTENT_TYPE, TypeParsers.multiFieldConverterTypeParser) 
.put(CompletionFieldMapper.CONTENT_TYPE, new CompletionFieldMapper.TypeParser()) - .put(GeoPointFieldMapper.CONTENT_TYPE, new GeoPointFieldMapper.TypeParser()) - .put(Murmur3FieldMapper.CONTENT_TYPE, new Murmur3FieldMapper.TypeParser()); + .put(GeoPointFieldMapper.CONTENT_TYPE, new GeoPointFieldMapper.TypeParser()); if (ShapesAvailability.JTS_AVAILABLE) { typeParsersBuilder.put(GeoShapeFieldMapper.CONTENT_TYPE, new GeoShapeFieldMapper.TypeParser()); diff --git a/core/src/main/java/org/elasticsearch/index/mapper/MapperBuilders.java b/core/src/main/java/org/elasticsearch/index/mapper/MapperBuilders.java index 73c92ded424..41b657a73ca 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/MapperBuilders.java +++ b/core/src/main/java/org/elasticsearch/index/mapper/MapperBuilders.java @@ -84,10 +84,6 @@ public final class MapperBuilders { return new LongFieldMapper.Builder(name); } - public static Murmur3FieldMapper.Builder murmur3Field(String name) { - return new Murmur3FieldMapper.Builder(name); - } - public static FloatFieldMapper.Builder floatField(String name) { return new FloatFieldMapper.Builder(name); } diff --git a/core/src/main/java/org/elasticsearch/plugins/PluginManager.java b/core/src/main/java/org/elasticsearch/plugins/PluginManager.java index fd140ec3303..3dbc2e9d2c2 100644 --- a/core/src/main/java/org/elasticsearch/plugins/PluginManager.java +++ b/core/src/main/java/org/elasticsearch/plugins/PluginManager.java @@ -86,6 +86,7 @@ public class PluginManager { "elasticsearch-delete-by-query", "elasticsearch-lang-javascript", "elasticsearch-lang-python", + "elasticsearch-mapper-murmur3", "elasticsearch-mapper-size" ).build(); diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregator.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregator.java index 6a72718f134..0a3918de46b 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregator.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregator.java @@ -56,7 +56,6 @@ import java.util.Map; public class CardinalityAggregator extends NumericMetricsAggregator.SingleValue { private final int precision; - private final boolean rehash; private final ValuesSource valuesSource; // Expensive to initialize, so we only initialize it when we have an actual value source @@ -66,11 +65,10 @@ public class CardinalityAggregator extends NumericMetricsAggregator.SingleValue private Collector collector; private ValueFormatter formatter; - public CardinalityAggregator(String name, ValuesSource valuesSource, boolean rehash, int precision, ValueFormatter formatter, + public CardinalityAggregator(String name, ValuesSource valuesSource, int precision, ValueFormatter formatter, AggregationContext context, Aggregator parent, List pipelineAggregators, Map metaData) throws IOException { super(name, context, parent, pipelineAggregators, metaData); this.valuesSource = valuesSource; - this.rehash = rehash; this.precision = precision; this.counts = valuesSource == null ? 
null : new HyperLogLogPlusPlus(precision, context.bigArrays(), 1); this.formatter = formatter; @@ -85,13 +83,6 @@ public class CardinalityAggregator extends NumericMetricsAggregator.SingleValue if (valuesSource == null) { return new EmptyCollector(); } - // if rehash is false then the value source is either already hashed, or the user explicitly - // requested not to hash the values (perhaps they already hashed the values themselves before indexing the doc) - // so we can just work with the original value source as is - if (!rehash) { - MurmurHash3Values hashValues = MurmurHash3Values.cast(((ValuesSource.Numeric) valuesSource).longValues(ctx)); - return new DirectCollector(counts, hashValues); - } if (valuesSource instanceof ValuesSource.Numeric) { ValuesSource.Numeric source = (ValuesSource.Numeric) valuesSource; diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregatorFactory.java b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregatorFactory.java index 9320020ca5b..1e660f06f45 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregatorFactory.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityAggregatorFactory.java @@ -19,7 +19,6 @@ package org.elasticsearch.search.aggregations.metrics.cardinality; -import org.elasticsearch.search.aggregations.AggregationExecutionException; import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.bucket.SingleBucketAggregator; import org.elasticsearch.search.aggregations.pipeline.PipelineAggregator; @@ -35,12 +34,10 @@ import java.util.Map; final class CardinalityAggregatorFactory extends ValuesSourceAggregatorFactory { private final long precisionThreshold; - private final boolean rehash; - CardinalityAggregatorFactory(String name, ValuesSourceConfig config, long precisionThreshold, boolean rehash) { + CardinalityAggregatorFactory(String name, ValuesSourceConfig config, long precisionThreshold) { super(name, InternalCardinality.TYPE.name(), config); this.precisionThreshold = precisionThreshold; - this.rehash = rehash; } private int precision(Aggregator parent) { @@ -50,16 +47,13 @@ final class CardinalityAggregatorFactory extends ValuesSourceAggregatorFactory pipelineAggregators, Map metaData) throws IOException { - return new CardinalityAggregator(name, null, true, precision(parent), config.formatter(), context, parent, pipelineAggregators, metaData); + return new CardinalityAggregator(name, null, precision(parent), config.formatter(), context, parent, pipelineAggregators, metaData); } @Override protected Aggregator doCreateInternal(ValuesSource valuesSource, AggregationContext context, Aggregator parent, boolean collectsFromSingleBucket, List pipelineAggregators, Map metaData) throws IOException { - if (!(valuesSource instanceof ValuesSource.Numeric) && !rehash) { - throw new AggregationExecutionException("Turning off rehashing for cardinality aggregation [" + name + "] on non-numeric values in not allowed"); - } - return new CardinalityAggregator(name, valuesSource, rehash, precision(parent), config.formatter(), context, parent, pipelineAggregators, + return new CardinalityAggregator(name, valuesSource, precision(parent), config.formatter(), context, parent, pipelineAggregators, metaData); } diff --git a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityParser.java 
b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityParser.java index afeca77d3a9..68339457fe7 100644 --- a/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityParser.java +++ b/core/src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityParser.java @@ -21,11 +21,9 @@ package org.elasticsearch.search.aggregations.metrics.cardinality; import org.elasticsearch.common.ParseField; import org.elasticsearch.common.xcontent.XContentParser; -import org.elasticsearch.index.mapper.core.Murmur3FieldMapper; import org.elasticsearch.search.SearchParseException; import org.elasticsearch.search.aggregations.Aggregator; import org.elasticsearch.search.aggregations.AggregatorFactory; -import org.elasticsearch.search.aggregations.support.ValuesSourceConfig; import org.elasticsearch.search.aggregations.support.ValuesSourceParser; import org.elasticsearch.search.internal.SearchContext; @@ -35,6 +33,7 @@ import java.io.IOException; public class CardinalityParser implements Aggregator.Parser { private static final ParseField PRECISION_THRESHOLD = new ParseField("precision_threshold"); + private static final ParseField REHASH = new ParseField("rehash").withAllDeprecated("no replacement - values will always be rehashed"); @Override public String type() { @@ -44,10 +43,9 @@ public class CardinalityParser implements Aggregator.Parser { @Override public AggregatorFactory parse(String name, XContentParser parser, SearchContext context) throws IOException { - ValuesSourceParser vsParser = ValuesSourceParser.any(name, InternalCardinality.TYPE, context).formattable(false).build(); + ValuesSourceParser vsParser = ValuesSourceParser.any(name, InternalCardinality.TYPE, context).formattable(false).build(); long precisionThreshold = -1; - Boolean rehash = null; XContentParser.Token token; String currentFieldName = null; @@ -57,8 +55,8 @@ public class CardinalityParser implements Aggregator.Parser { } else if (vsParser.token(currentFieldName, token, parser)) { continue; } else if (token.isValue()) { - if ("rehash".equals(currentFieldName)) { - rehash = parser.booleanValue(); + if (context.parseFieldMatcher().match(currentFieldName, REHASH)) { + // ignore } else if (context.parseFieldMatcher().match(currentFieldName, PRECISION_THRESHOLD)) { precisionThreshold = parser.longValue(); } else { @@ -70,15 +68,7 @@ public class CardinalityParser implements Aggregator.Parser { } } - ValuesSourceConfig config = vsParser.config(); - - if (rehash == null && config.fieldContext() != null && config.fieldContext().fieldType() instanceof Murmur3FieldMapper.Murmur3FieldType) { - rehash = false; - } else if (rehash == null) { - rehash = true; - } - - return new CardinalityAggregatorFactory(name, config, precisionThreshold, rehash); + return new CardinalityAggregatorFactory(name, vsParser.config(), precisionThreshold); } diff --git a/core/src/main/resources/org/elasticsearch/plugins/plugin-install.help b/core/src/main/resources/org/elasticsearch/plugins/plugin-install.help index 26fbf6a2313..61d47759a7a 100644 --- a/core/src/main/resources/org/elasticsearch/plugins/plugin-install.help +++ b/core/src/main/resources/org/elasticsearch/plugins/plugin-install.help @@ -43,6 +43,7 @@ OFFICIAL PLUGINS - elasticsearch-delete-by-query - elasticsearch-lang-javascript - elasticsearch-lang-python + - elasticsearch-mapper-murmur3 - elasticsearch-mapper-size diff --git a/core/src/test/java/org/elasticsearch/get/GetActionIT.java 
b/core/src/test/java/org/elasticsearch/get/GetActionIT.java index b376590f68b..743304df941 100644 --- a/core/src/test/java/org/elasticsearch/get/GetActionIT.java +++ b/core/src/test/java/org/elasticsearch/get/GetActionIT.java @@ -1116,7 +1116,7 @@ public class GetActionIT extends ESIntegTestCase { @Test public void testGeneratedNumberFieldsUnstored() throws IOException { indexSingleDocumentWithNumericFieldsGeneratedFromText(false, randomBoolean()); - String[] fieldsList = {"token_count", "text.token_count", "murmur", "text.murmur"}; + String[] fieldsList = {"token_count", "text.token_count"}; // before refresh - document is only in translog assertGetFieldsAlwaysNull(indexOrAlias(), "doc", "1", fieldsList); refresh(); @@ -1130,7 +1130,7 @@ public class GetActionIT extends ESIntegTestCase { @Test public void testGeneratedNumberFieldsStored() throws IOException { indexSingleDocumentWithNumericFieldsGeneratedFromText(true, randomBoolean()); - String[] fieldsList = {"token_count", "text.token_count", "murmur", "text.murmur"}; + String[] fieldsList = {"token_count", "text.token_count"}; // before refresh - document is only in translog assertGetFieldsNull(indexOrAlias(), "doc", "1", fieldsList); assertGetFieldsException(indexOrAlias(), "doc", "1", fieldsList); @@ -1159,10 +1159,6 @@ public class GetActionIT extends ESIntegTestCase { " \"analyzer\": \"standard\",\n" + " \"store\": \"" + storedString + "\"" + " },\n" + - " \"murmur\": {\n" + - " \"type\": \"murmur3\",\n" + - " \"store\": \"" + storedString + "\"" + - " },\n" + " \"text\": {\n" + " \"type\": \"string\",\n" + " \"fields\": {\n" + @@ -1170,10 +1166,6 @@ public class GetActionIT extends ESIntegTestCase { " \"type\": \"token_count\",\n" + " \"analyzer\": \"standard\",\n" + " \"store\": \"" + storedString + "\"" + - " },\n" + - " \"murmur\": {\n" + - " \"type\": \"murmur3\",\n" + - " \"store\": \"" + storedString + "\"" + " }\n" + " }\n" + " }" + @@ -1185,7 +1177,6 @@ public class GetActionIT extends ESIntegTestCase { assertAcked(prepareCreate("test").addAlias(new Alias("alias")).setSource(createIndexSource)); ensureGreen(); String doc = "{\n" + - " \"murmur\": \"Some value that can be hashed\",\n" + " \"token_count\": \"A text with five words.\",\n" + " \"text\": \"A text with five words.\"\n" + "}\n"; diff --git a/core/src/test/java/org/elasticsearch/plugins/PluginManagerIT.java b/core/src/test/java/org/elasticsearch/plugins/PluginManagerIT.java index bb2b01ca979..16cac39e0f7 100644 --- a/core/src/test/java/org/elasticsearch/plugins/PluginManagerIT.java +++ b/core/src/test/java/org/elasticsearch/plugins/PluginManagerIT.java @@ -550,6 +550,7 @@ public class PluginManagerIT extends ESIntegTestCase { PluginManager.checkForOfficialPlugins("elasticsearch-delete-by-query"); PluginManager.checkForOfficialPlugins("elasticsearch-lang-javascript"); PluginManager.checkForOfficialPlugins("elasticsearch-lang-python"); + PluginManager.checkForOfficialPlugins("elasticsearch-mapper-murmur3"); try { PluginManager.checkForOfficialPlugins("elasticsearch-mapper-attachment"); diff --git a/core/src/test/java/org/elasticsearch/search/aggregations/metrics/CardinalityIT.java b/core/src/test/java/org/elasticsearch/search/aggregations/metrics/CardinalityIT.java index 491e4f694c9..d77e4d1ccd0 100644 --- a/core/src/test/java/org/elasticsearch/search/aggregations/metrics/CardinalityIT.java +++ b/core/src/test/java/org/elasticsearch/search/aggregations/metrics/CardinalityIT.java @@ -61,54 +61,23 @@ public class CardinalityIT extends ESIntegTestCase { 
jsonBuilder().startObject().startObject("type").startObject("properties") .startObject("str_value") .field("type", "string") - .startObject("fields") - .startObject("hash") - .field("type", "murmur3") - .endObject() - .endObject() .endObject() .startObject("str_values") .field("type", "string") - .startObject("fields") - .startObject("hash") - .field("type", "murmur3") - .endObject() - .endObject() .endObject() .startObject("l_value") .field("type", "long") - .startObject("fields") - .startObject("hash") - .field("type", "murmur3") - .endObject() - .endObject() .endObject() .startObject("l_values") .field("type", "long") - .startObject("fields") - .startObject("hash") - .field("type", "murmur3") - .endObject() - .endObject() .endObject() - .startObject("d_value") - .field("type", "double") - .startObject("fields") - .startObject("hash") - .field("type", "murmur3") - .endObject() - .endObject() - .endObject() - .startObject("d_values") - .field("type", "double") - .startObject("fields") - .startObject("hash") - .field("type", "murmur3") - .endObject() - .endObject() - .endObject() - .endObject() - .endObject().endObject()).execute().actionGet(); + .startObject("d_value") + .field("type", "double") + .endObject() + .startObject("d_values") + .field("type", "double") + .endObject() + .endObject().endObject().endObject()).execute().actionGet(); numDocs = randomIntBetween(2, 100); precisionThreshold = randomIntBetween(0, 1 << randomInt(20)); @@ -145,12 +114,12 @@ public class CardinalityIT extends ESIntegTestCase { assertThat(count.getValue(), greaterThan(0L)); } } - private String singleNumericField(boolean hash) { - return (randomBoolean() ? "l_value" : "d_value") + (hash ? ".hash" : ""); + private String singleNumericField() { + return randomBoolean() ? "l_value" : "d_value"; } private String multiNumericField(boolean hash) { - return (randomBoolean() ? "l_values" : "d_values") + (hash ? ".hash" : ""); + return randomBoolean() ? 
"l_values" : "d_values"; } @Test @@ -195,24 +164,10 @@ public class CardinalityIT extends ESIntegTestCase { assertCount(count, numDocs); } - @Test - public void singleValuedStringHashed() throws Exception { - SearchResponse response = client().prepareSearch("idx").setTypes("type") - .addAggregation(cardinality("cardinality").precisionThreshold(precisionThreshold).field("str_value.hash")) - .execute().actionGet(); - - assertSearchResponse(response); - - Cardinality count = response.getAggregations().get("cardinality"); - assertThat(count, notNullValue()); - assertThat(count.getName(), equalTo("cardinality")); - assertCount(count, numDocs); - } - @Test public void singleValuedNumeric() throws Exception { SearchResponse response = client().prepareSearch("idx").setTypes("type") - .addAggregation(cardinality("cardinality").precisionThreshold(precisionThreshold).field(singleNumericField(false))) + .addAggregation(cardinality("cardinality").precisionThreshold(precisionThreshold).field(singleNumericField())) .execute().actionGet(); assertSearchResponse(response); @@ -229,7 +184,7 @@ public class CardinalityIT extends ESIntegTestCase { SearchResponse searchResponse = client().prepareSearch("idx").setQuery(matchAllQuery()) .addAggregation( global("global").subAggregation( - cardinality("cardinality").precisionThreshold(precisionThreshold).field(singleNumericField(false)))) + cardinality("cardinality").precisionThreshold(precisionThreshold).field(singleNumericField()))) .execute().actionGet(); assertSearchResponse(searchResponse); @@ -254,7 +209,7 @@ public class CardinalityIT extends ESIntegTestCase { @Test public void singleValuedNumericHashed() throws Exception { SearchResponse response = client().prepareSearch("idx").setTypes("type") - .addAggregation(cardinality("cardinality").precisionThreshold(precisionThreshold).field(singleNumericField(true))) + .addAggregation(cardinality("cardinality").precisionThreshold(precisionThreshold).field(singleNumericField())) .execute().actionGet(); assertSearchResponse(response); @@ -279,20 +234,6 @@ public class CardinalityIT extends ESIntegTestCase { assertCount(count, numDocs * 2); } - @Test - public void multiValuedStringHashed() throws Exception { - SearchResponse response = client().prepareSearch("idx").setTypes("type") - .addAggregation(cardinality("cardinality").precisionThreshold(precisionThreshold).field("str_values.hash")) - .execute().actionGet(); - - assertSearchResponse(response); - - Cardinality count = response.getAggregations().get("cardinality"); - assertThat(count, notNullValue()); - assertThat(count.getName(), equalTo("cardinality")); - assertCount(count, numDocs * 2); - } - @Test public void multiValuedNumeric() throws Exception { SearchResponse response = client().prepareSearch("idx").setTypes("type") @@ -356,7 +297,7 @@ public class CardinalityIT extends ESIntegTestCase { SearchResponse response = client().prepareSearch("idx").setTypes("type") .addAggregation( cardinality("cardinality").precisionThreshold(precisionThreshold).script( - new Script("doc['" + singleNumericField(false) + "'].value"))) + new Script("doc['" + singleNumericField() + "'].value"))) .execute().actionGet(); assertSearchResponse(response); @@ -417,7 +358,7 @@ public class CardinalityIT extends ESIntegTestCase { public void singleValuedNumericValueScript() throws Exception { SearchResponse response = client().prepareSearch("idx").setTypes("type") .addAggregation( - cardinality("cardinality").precisionThreshold(precisionThreshold).field(singleNumericField(false)) + 
cardinality("cardinality").precisionThreshold(precisionThreshold).field(singleNumericField()) .script(new Script("_value"))) .execute().actionGet(); @@ -464,23 +405,4 @@ } } - @Test - public void asSubAggHashed() throws Exception { - SearchResponse response = client().prepareSearch("idx").setTypes("type") - .addAggregation(terms("terms").field("str_value") - .collectMode(randomFrom(SubAggCollectionMode.values())) - .subAggregation(cardinality("cardinality").precisionThreshold(precisionThreshold).field("str_values.hash"))) - .execute().actionGet(); - - assertSearchResponse(response); - - Terms terms = response.getAggregations().get("terms"); - for (Terms.Bucket bucket : terms.getBuckets()) { - Cardinality count = bucket.getAggregations().get("cardinality"); - assertThat(count, notNullValue()); - assertThat(count.getName(), equalTo("cardinality")); - assertCount(count, 2); - } - } - } diff --git a/docs/plugins/mapper-murmur3.asciidoc b/docs/plugins/mapper-murmur3.asciidoc new file mode 100644 index 00000000000..2889eb4aa68 --- /dev/null +++ b/docs/plugins/mapper-murmur3.asciidoc @@ -0,0 +1,101 @@ +[[mapper-murmur3]] +=== Mapper Murmur3 Plugin + +The mapper-murmur3 plugin provides the ability to compute hashes of field values +at index-time and store them in the index. This can sometimes be helpful when +running cardinality aggregations on high-cardinality and large string fields. + +[[mapper-murmur3-install]] +[float] +==== Installation + +This plugin can be installed using the plugin manager: + +[source,sh] +---------------------------------------------------------------- +sudo bin/plugin install mapper-murmur3 +---------------------------------------------------------------- + +The plugin must be installed on every node in the cluster, and each node must +be restarted after installation. + +[[mapper-murmur3-remove]] +[float] +==== Removal + +The plugin can be removed with the following command: + +[source,sh] +---------------------------------------------------------------- +sudo bin/plugin remove mapper-murmur3 +---------------------------------------------------------------- + +The node must be stopped before removing the plugin. + +[[mapper-murmur3-usage]] +==== Using the `murmur3` field + +The `murmur3` field type is typically used within a multi-field, so that both the original +value and its hash are stored in the index: + +[source,js] +-------------------------- +PUT my_index +{ + "mappings": { + "my_type": { + "properties": { + "my_field": { + "type": "string", + "fields": { + "hash": { + "type": "murmur3" + } + } + } + } + } + } +} +-------------------------- +// AUTOSENSE + +Such a mapping allows you to refer to `my_field.hash` in order to get hashes +of the values of the `my_field` field. This is only useful for running +`cardinality` aggregations: + +[source,js] +-------------------------- +# Example documents +PUT my_index/my_type/1 +{ + "my_field": "This is a document" +} + +PUT my_index/my_type/2 +{ + "my_field": "This is another document" +} + +GET my_index/_search +{ + "aggs": { + "my_field_cardinality": { + "cardinality": { + "field": "my_field.hash" <1> + } + } + } +} +-------------------------- +// AUTOSENSE + +<1> Counting unique values on the `my_field.hash` field + +Running a `cardinality` aggregation on the `my_field` field directly would +yield the same result; however, using `my_field.hash` instead might result in +a speed-up if the field has a high cardinality.
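For example, as a sketch that assumes the `my_index` mapping shown above (the aggregation names are illustrative), the two aggregations below count unique values of the same data; the `my_field.hash` variant only differs in that hashes are read from the index instead of being computed at search time:

[source,js]
--------------------------
GET my_index/_search
{
  "aggs": {
    "direct_count":    { "cardinality": { "field": "my_field"      } },
    "prehashed_count": { "cardinality": { "field": "my_field.hash" } }
  }
}
--------------------------
// AUTOSENSE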
On the other hand, it is +discouraged to use the `murmur3` field on numeric fields, or on string fields +that are not almost unique, as the use of a `murmur3` field is unlikely to +bring significant speed-ups, while increasing the amount of disk space required +to store the index. diff --git a/docs/plugins/mapper.asciidoc b/docs/plugins/mapper.asciidoc index 1d9996d9818..226fc4e40d0 100644 --- a/docs/plugins/mapper.asciidoc +++ b/docs/plugins/mapper.asciidoc @@ -14,5 +14,10 @@ The mapper-size plugin provides the `_size` meta field which, when enabled, indexes the size in bytes of the original {ref}/mapping-source-field.html[`_source`] field. -include::mapper-size.asciidoc[] +<<mapper-murmur3>>:: +The mapper-murmur3 plugin allows hashes to be computed at index-time and stored +in the index for later use with the `cardinality` aggregation. + +include::mapper-size.asciidoc[] +include::mapper-murmur3.asciidoc[] diff --git a/docs/reference/aggregations/metrics/cardinality-aggregation.asciidoc b/docs/reference/aggregations/metrics/cardinality-aggregation.asciidoc index 0b484288b1c..1eb0c08772f 100644 --- a/docs/reference/aggregations/metrics/cardinality-aggregation.asciidoc +++ b/docs/reference/aggregations/metrics/cardinality-aggregation.asciidoc @@ -23,9 +23,9 @@ match a query: ==== Precision control -This aggregation also supports the `precision_threshold` and `rehash` options: +This aggregation also supports the `precision_threshold` option: -experimental[The `precision_threshold` and `rehash` options are specific to the current internal implementation of the `cardinality` agg, which may change in the future] +experimental[The `precision_threshold` option is specific to the current internal implementation of the `cardinality` agg, which may change in the future] [source,js] -------------------------------------------------- @@ -34,8 +34,7 @@ experimental[The `precision_threshold` and `rehash` options are specific to the "author_count" : { "cardinality" : { "field" : "author_hash", - "precision_threshold": 100, <1> - "rehash": false <2> + "precision_threshold": 100 <1> } } } @@ -49,11 +48,6 @@ supported value is 40000; thresholds above this number will have the same effect as a threshold of 40000. Default value depends on the number of parent aggregations that create multiple buckets (such as terms or histograms). -<2> If you computed a hash on client-side, stored it into your documents and want -Elasticsearch to use them to compute counts using this hash function without -rehashing values, it is possible to specify `rehash: false`. Default value is -`true`. Please note that the hash must be indexed as a long when `rehash` is -false. ==== Counts are approximate @@ -86,47 +80,11 @@ counting millions of items. ==== Pre-computed hashes -If you don't want Elasticsearch to re-compute hashes on every run of this -aggregation, it is possible to use pre-computed hashes, either by computing a -hash on client-side, indexing it and specifying `rehash: false`, or by using -the special `murmur3` field mapper, typically in the context of a `multi-field` -in the mapping: - -[source,js] --------------------------------------------------- -{ - "author": { - "type": "string", - "fields": { - "hash": { - "type": "murmur3" - } - } - } -} --------------------------------------------------- - -With such a mapping, Elasticsearch is going to compute hashes of the `author` field at indexing time and store them in the `author.hash` field.
This -way, unique counts can be computed using the cardinality aggregation by only -loading the hashes into memory, not the values of the `author` field, and -without computing hashes on the fly: - -[source,js] --------------------------------------------------- -{ - "aggs" : { - "author_count" : { - "cardinality" : { - "field" : "author.hash" - } - } - } -} --------------------------------------------------- - -NOTE: `rehash` is automatically set to `false` when computing unique counts on -a `murmur3` field. +On string fields that have a high cardinality, it might be faster to store the +hash of your field values in your index and then run the cardinality aggregation +on this field. This can either be done by providing hash values from the client side +or by letting Elasticsearch compute hash values for you by using the +{plugins}/mapper-murmur3.html[`mapper-murmur3`] plugin. NOTE: Pre-computing hashes is usually only useful on very large and/or high-cardinality fields as it saves CPU and memory. However, on numeric diff --git a/docs/reference/mapping/types.asciidoc b/docs/reference/mapping/types.asciidoc index abefda42085..52cd41e37e5 100644 --- a/docs/reference/mapping/types.asciidoc +++ b/docs/reference/mapping/types.asciidoc @@ -33,6 +33,7 @@ document: <>:: `completion` to provide auto-complete suggestions <>:: `token_count` to count the number of tokens in a string +{plugins}/mapper-murmur3.html[`mapper-murmur3`]:: `murmur3` to compute hashes of values at index-time and store them in the index Attachment datatype:: diff --git a/docs/reference/migration/migrate_2_0/removals.asciidoc b/docs/reference/migration/migrate_2_0/removals.asciidoc index 60f7422876c..ab764691042 100644 --- a/docs/reference/migration/migrate_2_0/removals.asciidoc +++ b/docs/reference/migration/migrate_2_0/removals.asciidoc @@ -41,6 +41,16 @@ can install the plugin with: The `_shutdown` API has been removed without a replacement. Nodes should be managed via the operating system and the provided start/stop scripts. +==== `murmur3` is now a plugin + +The `murmur3` field, which indexes hashes of the field values, has been moved +out of core and is available as a plugin. It can be installed as: + +[source,sh] +------------------ +./bin/plugin install mapper-murmur3 +------------------ + ==== `_size` is now a plugin The `_size` meta-data field, which indexes the size in bytes of the original diff --git a/plugins/mapper-murmur3/licenses/no_deps.txt b/plugins/mapper-murmur3/licenses/no_deps.txt new file mode 100644 index 00000000000..8cce254d037 --- /dev/null +++ b/plugins/mapper-murmur3/licenses/no_deps.txt @@ -0,0 +1 @@ +This plugin has no third party dependencies diff --git a/plugins/mapper-murmur3/pom.xml b/plugins/mapper-murmur3/pom.xml new file mode 100644 index 00000000000..9d4c7db0000 --- /dev/null +++ b/plugins/mapper-murmur3/pom.xml @@ -0,0 +1,43 @@ + + + + + 4.0.0 + + + org.elasticsearch.plugin + elasticsearch-plugin + 2.1.0-SNAPSHOT + + + elasticsearch-mapper-murmur3 + Elasticsearch Mapper Murmur3 plugin + The Mapper Murmur3 plugin allows to compute hashes of a field's values at index-time and to store them in the index.
+ + + org.elasticsearch.plugin.mapper.MapperMurmur3Plugin + mapper_murmur3 + false + + + + + + org.apache.maven.plugins + maven-assembly-plugin + + + + + diff --git a/plugins/mapper-murmur3/rest-api-spec/test/mapper_murmur3/10_basic.yaml b/plugins/mapper-murmur3/rest-api-spec/test/mapper_murmur3/10_basic.yaml new file mode 100644 index 00000000000..4ed879c7192 --- /dev/null +++ b/plugins/mapper-murmur3/rest-api-spec/test/mapper_murmur3/10_basic.yaml @@ -0,0 +1,65 @@ +# Integration tests for Mapper Murmur3 components +# + +--- +"Mapper Murmur3": + + - do: + indices.create: + index: test + body: + mappings: + type1: { "properties": { "foo": { "type": "string", "fields": { "hash": { "type": "murmur3" } } } } } + + - do: + index: + index: test + type: type1 + id: 0 + body: { "foo": null } + + - do: + indices.refresh: {} + + - do: + search: + body: { "aggs": { "foo_count": { "cardinality": { "field": "foo.hash" } } } } + + - match: { aggregations.foo_count.value: 0 } + + - do: + index: + index: test + type: type1 + id: 1 + body: { "foo": "bar" } + + - do: + index: + index: test + type: type1 + id: 2 + body: { "foo": "baz" } + + - do: + index: + index: test + type: type1 + id: 3 + body: { "foo": "quux" } + + - do: + index: + index: test + type: type1 + id: 4 + body: { "foo": "bar" } + + - do: + indices.refresh: {} + + - do: + search: + body: { "aggs": { "foo_count": { "cardinality": { "field": "foo.hash" } } } } + + - match: { aggregations.foo_count.value: 3 } diff --git a/core/src/main/java/org/elasticsearch/index/mapper/core/Murmur3FieldMapper.java b/plugins/mapper-murmur3/src/main/java/org/elasticsearch/index/mapper/murmur3/Murmur3FieldMapper.java similarity index 85% rename from core/src/main/java/org/elasticsearch/index/mapper/core/Murmur3FieldMapper.java rename to plugins/mapper-murmur3/src/main/java/org/elasticsearch/index/mapper/murmur3/Murmur3FieldMapper.java index 5e7b664aceb..288a40471e7 100644 --- a/core/src/main/java/org/elasticsearch/index/mapper/core/Murmur3FieldMapper.java +++ b/plugins/mapper-murmur3/src/main/java/org/elasticsearch/index/mapper/murmur3/Murmur3FieldMapper.java @@ -17,9 +17,10 @@ * under the License. 
*/ -package org.elasticsearch.index.mapper.core; +package org.elasticsearch.index.mapper.murmur3; import org.apache.lucene.document.Field; +import org.apache.lucene.index.IndexOptions; import org.apache.lucene.util.BytesRef; import org.elasticsearch.Version; import org.elasticsearch.common.Explicit; @@ -31,12 +32,13 @@ import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.mapper.Mapper; import org.elasticsearch.index.mapper.MapperParsingException; import org.elasticsearch.index.mapper.ParseContext; +import org.elasticsearch.index.mapper.core.LongFieldMapper; +import org.elasticsearch.index.mapper.core.NumberFieldMapper; import java.io.IOException; import java.util.List; import java.util.Map; -import static org.elasticsearch.index.mapper.MapperBuilders.murmur3Field; import static org.elasticsearch.index.mapper.core.TypeParsers.parseNumberField; public class Murmur3FieldMapper extends LongFieldMapper { @@ -45,6 +47,9 @@ public class Murmur3FieldMapper extends LongFieldMapper { public static class Defaults extends LongFieldMapper.Defaults { public static final MappedFieldType FIELD_TYPE = new Murmur3FieldType(); + static { + FIELD_TYPE.freeze(); + } } public static class Builder extends NumberFieldMapper.Builder { @@ -65,6 +70,17 @@ public class Murmur3FieldMapper extends LongFieldMapper { return fieldMapper; } + @Override + protected void setupFieldType(BuilderContext context) { + super.setupFieldType(context); + if (context.indexCreatedVersion().onOrAfter(Version.V_2_0_0)) { + fieldType.setIndexOptions(IndexOptions.NONE); + defaultFieldType.setIndexOptions(IndexOptions.NONE); + fieldType.setHasDocValues(true); + defaultFieldType.setHasDocValues(true); + } + } + @Override protected NamedAnalyzer makeNumberAnalyzer(int precisionStep) { return NumericLongAnalyzer.buildNamedAnalyzer(precisionStep); @@ -80,7 +96,7 @@ public class Murmur3FieldMapper extends LongFieldMapper { @Override @SuppressWarnings("unchecked") public Mapper.Builder parse(String name, Map node, ParserContext parserContext) throws MapperParsingException { - Builder builder = murmur3Field(name); + Builder builder = new Builder(name); // tweaking these settings is no longer allowed, the entire purpose of murmur3 fields is to store a hash if (parserContext.indexVersionCreated().onOrAfter(Version.V_2_0_0_beta1)) { @@ -92,6 +108,10 @@ public class Murmur3FieldMapper extends LongFieldMapper { } } + if (parserContext.indexVersionCreated().before(Version.V_2_0_0)) { + builder.indexOptions(IndexOptions.DOCS); + } + parseNumberField(builder, name, node, parserContext); // Because this mapper extends LongFieldMapper the null_value field will be added to the JSON when transferring cluster state // between nodes so we have to remove the entry here so that the validation doesn't fail @@ -104,7 +124,8 @@ public class Murmur3FieldMapper extends LongFieldMapper { // this only exists so a check can be done to match the field type to using murmur3 hashing... 
public static class Murmur3FieldType extends LongFieldMapper.LongFieldType { - public Murmur3FieldType() {} + public Murmur3FieldType() { + } protected Murmur3FieldType(Murmur3FieldType ref) { super(ref); diff --git a/plugins/mapper-murmur3/src/main/java/org/elasticsearch/index/mapper/murmur3/RegisterMurmur3FieldMapper.java b/plugins/mapper-murmur3/src/main/java/org/elasticsearch/index/mapper/murmur3/RegisterMurmur3FieldMapper.java new file mode 100644 index 00000000000..5a6a71222c0 --- /dev/null +++ b/plugins/mapper-murmur3/src/main/java/org/elasticsearch/index/mapper/murmur3/RegisterMurmur3FieldMapper.java @@ -0,0 +1,36 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.index.mapper.murmur3; + +import org.elasticsearch.common.inject.Inject; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.index.AbstractIndexComponent; +import org.elasticsearch.index.Index; +import org.elasticsearch.index.mapper.MapperService; + +public class RegisterMurmur3FieldMapper extends AbstractIndexComponent { + + @Inject + public RegisterMurmur3FieldMapper(Index index, Settings indexSettings, MapperService mapperService) { + super(index, indexSettings); + mapperService.documentMapperParser().putTypeParser(Murmur3FieldMapper.CONTENT_TYPE, new Murmur3FieldMapper.TypeParser()); + } + +} diff --git a/plugins/mapper-murmur3/src/main/java/org/elasticsearch/plugin/mapper/MapperMurmur3IndexModule.java b/plugins/mapper-murmur3/src/main/java/org/elasticsearch/plugin/mapper/MapperMurmur3IndexModule.java new file mode 100644 index 00000000000..51054d774bd --- /dev/null +++ b/plugins/mapper-murmur3/src/main/java/org/elasticsearch/plugin/mapper/MapperMurmur3IndexModule.java @@ -0,0 +1,31 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.plugin.mapper; + +import org.elasticsearch.common.inject.AbstractModule; +import org.elasticsearch.index.mapper.murmur3.RegisterMurmur3FieldMapper; + +public class MapperMurmur3IndexModule extends AbstractModule { + + @Override + protected void configure() { + bind(RegisterMurmur3FieldMapper.class).asEagerSingleton(); + } +} diff --git a/plugins/mapper-murmur3/src/main/java/org/elasticsearch/plugin/mapper/MapperMurmur3Plugin.java b/plugins/mapper-murmur3/src/main/java/org/elasticsearch/plugin/mapper/MapperMurmur3Plugin.java new file mode 100644 index 00000000000..9b6611decde --- /dev/null +++ b/plugins/mapper-murmur3/src/main/java/org/elasticsearch/plugin/mapper/MapperMurmur3Plugin.java @@ -0,0 +1,45 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.plugin.mapper; + +import org.elasticsearch.common.inject.Module; +import org.elasticsearch.plugins.AbstractPlugin; + +import java.util.Collection; +import java.util.Collections; + +public class MapperMurmur3Plugin extends AbstractPlugin { + + @Override + public String name() { + return "mapper-murmur3"; + } + + @Override + public String description() { + return "A mapper that allows to precompute murmur3 hashes of values at index-time and store them in the index"; + } + + @Override + public Collection> indexModules() { + return Collections.>singleton(MapperMurmur3IndexModule.class); + } + +} diff --git a/plugins/mapper-murmur3/src/test/java/org/elasticsearch/index/mapper/murmur3/MapperMurmur3RestIT.java b/plugins/mapper-murmur3/src/test/java/org/elasticsearch/index/mapper/murmur3/MapperMurmur3RestIT.java new file mode 100644 index 00000000000..f440a343bc6 --- /dev/null +++ b/plugins/mapper-murmur3/src/test/java/org/elasticsearch/index/mapper/murmur3/MapperMurmur3RestIT.java @@ -0,0 +1,42 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.index.mapper.murmur3; + +import com.carrotsearch.randomizedtesting.annotations.Name; +import com.carrotsearch.randomizedtesting.annotations.ParametersFactory; + +import org.elasticsearch.test.rest.ESRestTestCase; +import org.elasticsearch.test.rest.RestTestCandidate; +import org.elasticsearch.test.rest.parser.RestTestParseException; + +import java.io.IOException; + +public class MapperMurmur3RestIT extends ESRestTestCase { + + public MapperMurmur3RestIT(@Name("yaml") RestTestCandidate testCandidate) { + super(testCandidate); + } + + @ParametersFactory + public static Iterable parameters() throws IOException, RestTestParseException { + return createParameters(0, 1); + } +} + diff --git a/core/src/test/java/org/elasticsearch/index/mapper/core/Murmur3FieldMapperTests.java b/plugins/mapper-murmur3/src/test/java/org/elasticsearch/index/mapper/murmur3/Murmur3FieldMapperTests.java similarity index 80% rename from core/src/test/java/org/elasticsearch/index/mapper/core/Murmur3FieldMapperTests.java rename to plugins/mapper-murmur3/src/test/java/org/elasticsearch/index/mapper/murmur3/Murmur3FieldMapperTests.java index d4cfcb8ccde..676a5c4c1cb 100644 --- a/core/src/test/java/org/elasticsearch/index/mapper/core/Murmur3FieldMapperTests.java +++ b/plugins/mapper-murmur3/src/test/java/org/elasticsearch/index/mapper/murmur3/Murmur3FieldMapperTests.java @@ -17,9 +17,11 @@ * under the License. */ -package org.elasticsearch.index.mapper.core; +package org.elasticsearch.index.mapper.murmur3; +import org.apache.lucene.index.DocValuesType; import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.IndexableField; import org.elasticsearch.Version; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.common.settings.Settings; @@ -28,9 +30,12 @@ import org.elasticsearch.index.IndexService; import org.elasticsearch.index.mapper.DocumentMapper; import org.elasticsearch.index.mapper.DocumentMapperParser; import org.elasticsearch.index.mapper.MapperParsingException; +import org.elasticsearch.index.mapper.ParsedDocument; import org.elasticsearch.test.ESSingleNodeTestCase; import org.junit.Before; +import java.util.Arrays; + public class Murmur3FieldMapperTests extends ESSingleNodeTestCase { IndexService indexService; @@ -40,6 +45,22 @@ public class Murmur3FieldMapperTests extends ESSingleNodeTestCase { public void before() { indexService = createIndex("test"); parser = indexService.mapperService().documentMapperParser(); + parser.putTypeParser(Murmur3FieldMapper.CONTENT_TYPE, new Murmur3FieldMapper.TypeParser()); + } + + public void testDefaults() throws Exception { + String mapping = XContentFactory.jsonBuilder().startObject().startObject("type") + .startObject("properties").startObject("field") + .field("type", "murmur3") + .endObject().endObject().endObject().endObject().string(); + DocumentMapper mapper = parser.parse(mapping); + ParsedDocument parsedDoc = mapper.parse("test", "type", "1", XContentFactory.jsonBuilder().startObject().field("field", "value").endObject().bytes()); + IndexableField[] fields = parsedDoc.rootDoc().getFields("field"); + assertNotNull(fields); + assertEquals(Arrays.toString(fields), 1, fields.length); + IndexableField field = fields[0]; + assertEquals(IndexOptions.NONE, field.fieldType().indexOptions()); + assertEquals(DocValuesType.SORTED_NUMERIC, field.fieldType().docValuesType()); } public void testDocValuesSettingNotAllowed() throws Exception { @@ -100,6 +121,7 @@ public class Murmur3FieldMapperTests 
extends ESSingleNodeTestCase { Settings settings = Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.V_1_4_2.id).build(); indexService = createIndex("test_bwc", settings); parser = indexService.mapperService().documentMapperParser(); + parser.putTypeParser(Murmur3FieldMapper.CONTENT_TYPE, new Murmur3FieldMapper.TypeParser()); String mapping = XContentFactory.jsonBuilder().startObject().startObject("type") .startObject("properties").startObject("field") .field("type", "murmur3") @@ -115,6 +137,7 @@ public class Murmur3FieldMapperTests extends ESSingleNodeTestCase { Settings settings = Settings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.V_1_4_2.id).build(); indexService = createIndex("test_bwc", settings); parser = indexService.mapperService().documentMapperParser(); + parser.putTypeParser(Murmur3FieldMapper.CONTENT_TYPE, new Murmur3FieldMapper.TypeParser()); String mapping = XContentFactory.jsonBuilder().startObject().startObject("type") .startObject("properties").startObject("field") .field("type", "murmur3") diff --git a/plugins/pom.xml b/plugins/pom.xml index 78e3d65aa0d..242bf84ad5e 100644 --- a/plugins/pom.xml +++ b/plugins/pom.xml @@ -436,6 +436,7 @@ delete-by-query lang-python lang-javascript + mapper-murmur3 mapper-size jvm-example site-example diff --git a/qa/smoke-test-plugins/pom.xml b/qa/smoke-test-plugins/pom.xml index d479b16b7f0..b56bc35e57b 100644 --- a/qa/smoke-test-plugins/pom.xml +++ b/qa/smoke-test-plugins/pom.xml @@ -333,6 +333,14 @@ true + + org.elasticsearch.plugin + elasticsearch-mapper-murmur3 + ${elasticsearch.version} + zip + true + + org.elasticsearch.plugin elasticsearch-mapper-size From d13078546a0b5fb2dd63fd449d702ab88b768315 Mon Sep 17 00:00:00 2001 From: Clinton Gormley Date: Tue, 18 Aug 2015 12:16:49 +0200 Subject: [PATCH 10/39] Docs: Fixed malformed table in geo-polygon query --- docs/reference/query-dsl/geo-polygon-query.asciidoc | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/reference/query-dsl/geo-polygon-query.asciidoc b/docs/reference/query-dsl/geo-polygon-query.asciidoc index d778999c251..ff53a351b67 100644 --- a/docs/reference/query-dsl/geo-polygon-query.asciidoc +++ b/docs/reference/query-dsl/geo-polygon-query.asciidoc @@ -39,6 +39,7 @@ standard -180:180 / -90:90 coordinate system. (default is `false`). |`ignore_malformed` |Set to `true` to accept geo points with invalid latitude or longitude (default is `false`). +|======================================================================= [float] ==== Allowed Formats From 0b5a027d6ac051e8be06a10c670972ab0495089f Mon Sep 17 00:00:00 2001 From: Clinton Gormley Date: Tue, 18 Aug 2015 12:20:00 +0200 Subject: [PATCH 11/39] Docs: Fixed bad ID in geo bounding box --- docs/reference/query-dsl/geo-bounding-box-query.asciidoc | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/docs/reference/query-dsl/geo-bounding-box-query.asciidoc b/docs/reference/query-dsl/geo-bounding-box-query.asciidoc index ea14c58e212..60aab15b54a 100644 --- a/docs/reference/query-dsl/geo-bounding-box-query.asciidoc +++ b/docs/reference/query-dsl/geo-bounding-box-query.asciidoc @@ -59,7 +59,7 @@ standard -180:180 / -90:90 coordinate system. (default is `false`). accept geo points with invalid latitude or longitude (default is `false`). |`type` |Set to one of `indexed` or `memory` to define whether this filter will -be executed in memory or indexed. See <> below for further details +be executed in memory or indexed. See <> below for further details Default is `memory`.
|======================================================================= @@ -214,6 +214,7 @@ a single location / point matches the filter, the document will be included in the filter [float] +[[geo-bbox-type]] ==== Type The type of the bounding box execution by default is set to `memory`, From d7bf510fe03d062a6601fa9af196dce67f43e80f Mon Sep 17 00:00:00 2001 From: kakakakakku Date: Tue, 18 Aug 2015 15:35:04 +0900 Subject: [PATCH 12/39] Fixed section name and api name in docs --- docs/reference/docs/update.asciidoc | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/docs/reference/docs/update.asciidoc b/docs/reference/docs/update.asciidoc index ada9e283a45..06958b4e624 100644 --- a/docs/reference/docs/update.asciidoc +++ b/docs/reference/docs/update.asciidoc @@ -114,7 +114,7 @@ If both `doc` and `script` are specified, then `doc` is ignored. Best is to put your field pairs of the partial document in the script itself. [float] -=== `detect_noop` +=== Detecting noop By default if `doc` is specified then the document is always updated even if the merging process doesn't cause any changes. Specifying `detect_noop` @@ -247,12 +247,10 @@ return the full updated source. `version` & `version_type`:: -The Update API uses the Elasticsearch's versioning support internally to make +The update API uses Elasticsearch's versioning support internally to make sure the document doesn't change during the update. You can use the `version` parameter to specify that the document should only be updated if its version matches the one specified. By setting version type to `force` you can force the new version of the document after update (use with care! with `force` there is no guarantee the document didn't change). Version types `external` & `external_gte` are not supported. - - From a72adbf0b4c4acaaa60635de44d99a82f9ffd3a1 Mon Sep 17 00:00:00 2001 From: Adrien Grand Date: Tue, 18 Aug 2015 12:30:51 +0200 Subject: [PATCH 13/39] Fix mapper-murmur3 compatibility version.
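The gist of the fix, as a sketch (the version constants are the ones that appear in the diff below, the rest is illustrative): an index created by 2.0.0-beta1 reports a version that sorts before `V_2_0_0`, so a gate on `V_2_0_0` would wrongly apply the legacy indexed behaviour to beta-created indices; the gate has to be the first 2.x constant, `V_2_0_0_beta1`.

```java
// Sketch only, not part of the diff below: why the gate must be beta1.
Version created = Version.V_2_0_0_beta1;         // index created by a 2.0 beta
assert created.before(Version.V_2_0_0);          // true: beta1 sorts before the GA constant
assert created.onOrAfter(Version.V_2_0_0_beta1); // true: the corrected gate matches it
```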
--- .../index/mapper/murmur3/Murmur3FieldMapper.java | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/plugins/mapper-murmur3/src/main/java/org/elasticsearch/index/mapper/murmur3/Murmur3FieldMapper.java b/plugins/mapper-murmur3/src/main/java/org/elasticsearch/index/mapper/murmur3/Murmur3FieldMapper.java index 288a40471e7..60c31c3f765 100644 --- a/plugins/mapper-murmur3/src/main/java/org/elasticsearch/index/mapper/murmur3/Murmur3FieldMapper.java +++ b/plugins/mapper-murmur3/src/main/java/org/elasticsearch/index/mapper/murmur3/Murmur3FieldMapper.java @@ -73,7 +73,7 @@ public class Murmur3FieldMapper extends LongFieldMapper { @Override protected void setupFieldType(BuilderContext context) { super.setupFieldType(context); - if (context.indexCreatedVersion().onOrAfter(Version.V_2_0_0)) { + if (context.indexCreatedVersion().onOrAfter(Version.V_2_0_0_beta1)) { fieldType.setIndexOptions(IndexOptions.NONE); defaultFieldType.setIndexOptions(IndexOptions.NONE); fieldType.setHasDocValues(true); @@ -108,7 +108,7 @@ public class Murmur3FieldMapper extends LongFieldMapper { } } - if (parserContext.indexVersionCreated().before(Version.V_2_0_0)) { + if (parserContext.indexVersionCreated().before(Version.V_2_0_0_beta1)) { builder.indexOptions(IndexOptions.DOCS); } From e74f559fd4cb5b80a62c769517b53bddb91be53e Mon Sep 17 00:00:00 2001 From: Martijn van Groningen Date: Tue, 18 Aug 2015 12:27:47 +0200 Subject: [PATCH 14/39] parent/child: Explicitly disabled the query cache It was already disabled, but during tests the test framework enabled the query cache by setting the static default query cache. The caching behaviour should be the same in production and in tests. --- .../java/org/elasticsearch/index/query/HasChildQueryParser.java | 1 + 1 file changed, 1 insertion(+) diff --git a/core/src/main/java/org/elasticsearch/index/query/HasChildQueryParser.java b/core/src/main/java/org/elasticsearch/index/query/HasChildQueryParser.java index c5388b16557..d911005eb89 100644 --- a/core/src/main/java/org/elasticsearch/index/query/HasChildQueryParser.java +++ b/core/src/main/java/org/elasticsearch/index/query/HasChildQueryParser.java @@ -258,6 +258,7 @@ public class HasChildQueryParser implements QueryParser { String joinField = ParentFieldMapper.joinField(parentType); IndexReader indexReader = searchContext.searcher().getIndexReader(); IndexSearcher indexSearcher = new IndexSearcher(indexReader); + indexSearcher.setQueryCache(null); IndexParentChildFieldData indexParentChildFieldData = parentChildIndexFieldData.loadGlobal(indexReader); MultiDocValues.OrdinalMap ordinalMap = ParentChildIndexFieldData.getOrdinalMap(indexParentChildFieldData, parentType); return JoinUtil.createJoinQuery(joinField, innerQuery, toQuery, indexSearcher, scoreMode, ordinalMap, minChildren, maxChildren); From 975eb60a1247066bcdfdb2fbd07dfa40892e15d0 Mon Sep 17 00:00:00 2001 From: David Pilato Date: Tue, 18 Aug 2015 11:53:49 +0200 Subject: [PATCH 15/39] [doc] we don't use `check_lucene` anymore in plugins --- docs/plugins/plugin-script.asciidoc | 15 --------------- 1 file changed, 15 deletions(-) diff --git a/docs/plugins/plugin-script.asciidoc b/docs/plugins/plugin-script.asciidoc index ce6bbf6d955..06263d730ad 100644 --- a/docs/plugins/plugin-script.asciidoc +++ b/docs/plugins/plugin-script.asciidoc @@ -223,18 +223,3 @@ plugin.mandatory: mapper-attachments,lang-groovy For safety reasons, a node will not start if it is missing a mandatory plugin. 
-[float] -=== Lucene version dependent plugins - -For some plugins, such as analysis plugins, a specific major Lucene version is -required to run. In that case, the plugin provides in its -`es-plugin.properties` file the Lucene version for which the plugin was built for. - -If present at startup the node will check the Lucene version before loading -the plugin. You can disable that check using - -[source,yaml] --------------------------------------------------- -plugins.check_lucene: false --------------------------------------------------- - From 44e6d1aac65c5a8667bd78ee2a623118406f1866 Mon Sep 17 00:00:00 2001 From: David Pilato Date: Tue, 18 Aug 2015 13:00:04 +0200 Subject: [PATCH 16/39] [doc] Move mapper attachment plugin to mapper page --- docs/plugins/api.asciidoc | 6 ------ docs/plugins/mapper.asciidoc | 6 ++++++ 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/docs/plugins/api.asciidoc b/docs/plugins/api.asciidoc index 343d246ae2d..9e3b8f34da4 100644 --- a/docs/plugins/api.asciidoc +++ b/docs/plugins/api.asciidoc @@ -15,12 +15,6 @@ The delete by query plugin adds support for deleting all of the documents replacement for the problematic _delete-by-query_ functionality which has been removed from Elasticsearch core. -https://github.com/elasticsearch/elasticsearch-mapper-attachments[Mapper Attachments Type plugin]:: - -Integrates http://lucene.apache.org/tika/[Apache Tika] to provide a new field -type `attachment` to allow indexing of documents such as PDFs and Microsoft -Word. - [float] === Community contributed API extension plugins diff --git a/docs/plugins/mapper.asciidoc b/docs/plugins/mapper.asciidoc index 226fc4e40d0..c6a3a7b35aa 100644 --- a/docs/plugins/mapper.asciidoc +++ b/docs/plugins/mapper.asciidoc @@ -8,6 +8,12 @@ Mapper plugins allow new field datatypes to be added to Elasticsearch. The core mapper plugins are: +https://github.com/elasticsearch/elasticsearch-mapper-attachments[Mapper Attachments Type plugin]:: + +Integrates http://lucene.apache.org/tika/[Apache Tika] to provide a new field +type `attachment` to allow indexing of documents such as PDFs and Microsoft +Word. + <<mapper-size>>:: The mapper-size plugin provides the `_size` meta field which, when enabled, From 3f04ee076e605dbf28c75f79bf08988ddc18079b Mon Sep 17 00:00:00 2001 From: javanna Date: Tue, 18 Aug 2015 13:04:30 +0200 Subject: [PATCH 17/39] Internal: IndicesQueriesRegistry back to being created only once With #12921 we refactored IndicesModule but we forgot to make sure we create IndicesQueriesRegistry once. IndicesQueriesModule used to do `bind(IndicesQueriesRegistry.class).asEagerSingleton();` otherwise we get multiple instances of the registry. This needs to be ported to the IndicesModule.
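For reference, a minimal sketch of the difference using plain Guice (Elasticsearch embeds a forked copy under `org.elasticsearch.common.inject` with the same scoping behaviour; the `Registry` class below is a stand-in, not the real one):

```java
import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;

public class ScopeDemo {
    public static class Registry {}

    public static void main(String[] args) {
        // Unscoped binding: Guice builds a new instance per lookup/injection point.
        Injector unscoped = Guice.createInjector(new AbstractModule() {
            @Override
            protected void configure() {
                bind(Registry.class);
            }
        });
        // prints false: two lookups, two instances
        System.out.println(unscoped.getInstance(Registry.class) == unscoped.getInstance(Registry.class));

        // Eager singleton: one instance, created when the injector is built.
        Injector singleton = Guice.createInjector(new AbstractModule() {
            @Override
            protected void configure() {
                bind(Registry.class).asEagerSingleton();
            }
        });
        // prints true: every lookup sees the same instance
        System.out.println(singleton.getInstance(Registry.class) == singleton.getInstance(Registry.class));
    }
}
```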
--- .../main/java/org/elasticsearch/indices/IndicesModule.java | 7 ++----- 1 file changed, 2 insertions(+), 5 deletions(-) diff --git a/core/src/main/java/org/elasticsearch/indices/IndicesModule.java b/core/src/main/java/org/elasticsearch/indices/IndicesModule.java index 2704478ed42..759c9e5e150 100644 --- a/core/src/main/java/org/elasticsearch/indices/IndicesModule.java +++ b/core/src/main/java/org/elasticsearch/indices/IndicesModule.java @@ -24,7 +24,6 @@ import org.elasticsearch.action.update.UpdateHelper; import org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService; import org.elasticsearch.common.geo.ShapesAvailability; import org.elasticsearch.common.inject.AbstractModule; -import org.elasticsearch.common.inject.multibindings.MapBinder; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.util.ExtensionPoint; import org.elasticsearch.index.query.*; @@ -38,6 +37,7 @@ import org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache; import org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCacheListener; import org.elasticsearch.indices.flush.SyncedFlushService; import org.elasticsearch.indices.memory.IndexingMemoryController; +import org.elasticsearch.indices.query.IndicesQueriesRegistry; import org.elasticsearch.indices.recovery.RecoverySettings; import org.elasticsearch.indices.recovery.RecoverySource; import org.elasticsearch.indices.recovery.RecoveryTarget; @@ -45,10 +45,6 @@ import org.elasticsearch.indices.store.IndicesStore; import org.elasticsearch.indices.store.TransportNodesListShardStoreMetaData; import org.elasticsearch.indices.ttl.IndicesTTLService; -import java.security.cert.Extension; -import java.util.HashMap; -import java.util.Map; - /** * Configures classes and services that are shared by indices on each node. */ @@ -159,6 +155,7 @@ public class IndicesModule extends AbstractModule { protected void bindQueryParsersExtension() { queryParsers.bind(binder()); + bind(IndicesQueriesRegistry.class).asEagerSingleton(); } protected void bindHunspellExtension() { From 717a6dd092711d3c5f6f50fdcfe7a9edc473a6ef Mon Sep 17 00:00:00 2001 From: David Pilato Date: Tue, 18 Aug 2015 13:12:58 +0200 Subject: [PATCH 18/39] [doc] Backport change in cloud-aws doc Related to https://github.com/elastic/elasticsearch/pull/12761 --- docs/plugins/cloud-aws.asciidoc | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/plugins/cloud-aws.asciidoc b/docs/plugins/cloud-aws.asciidoc index 34cac7976f7..7b0fee374e3 100644 --- a/docs/plugins/cloud-aws.asciidoc +++ b/docs/plugins/cloud-aws.asciidoc @@ -259,8 +259,8 @@ The following settings are supported: `base_path`:: - Specifies the path within bucket to repository data. Defaults to root - directory. + Specifies the path within bucket to repository data. Defaults to + value of `repositories.s3.base_path` or to root directory if not set. 
`access_key`:: From 05678bc10ae61bba29db33ec76cad97b3d66226a Mon Sep 17 00:00:00 2001 From: David Pilato Date: Tue, 18 Aug 2015 13:18:39 +0200 Subject: [PATCH 19/39] [doc] Fix cloud-azure install / remove instructions --- docs/plugins/cloud-azure.asciidoc | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/plugins/cloud-azure.asciidoc b/docs/plugins/cloud-azure.asciidoc index 151cd3380c6..80fd189ad2e 100644 --- a/docs/plugins/cloud-azure.asciidoc +++ b/docs/plugins/cloud-azure.asciidoc @@ -13,7 +13,7 @@ This plugin can be installed using the plugin manager: [source,sh] ---------------------------------------------------------------- -sudo bin/plugin install cloud-aws +sudo bin/plugin install cloud-azure ---------------------------------------------------------------- The plugin must be installed on every node in the cluster, and each node must be restarted after installation. @@ -27,7 +27,7 @@ The plugin can be removed with the following command: [source,sh] ---------------------------------------------------------------- -sudo bin/plugin remove cloud-aws +sudo bin/plugin remove cloud-azure ---------------------------------------------------------------- The node must be stopped before removing the plugin. From da65493965bc49cb10ddf6ae3bf69784be6c1a10 Mon Sep 17 00:00:00 2001 From: David Pilato Date: Thu, 13 Aug 2015 12:47:22 +0200 Subject: [PATCH 20/39] [qa] multinode tests fail when you run low on disk space (85%) Indeed, we check within the test suite that we have no unassigned shards. But when the test starts on my machine I get: ``` [elasticsearch] [2015-08-13 12:03:18,801][INFO ][org.elasticsearch.cluster.routing.allocation.decider] [Kehl of Tauran] low disk watermark [85%] exceeded on [eLujVjWAQ8OHdhscmaf0AQ][Jackhammer] free: 59.8gb[12.8%], replicas will not be assigned to this node ``` ``` 2> REPRODUCE WITH: mvn verify -Pdev -Dskip.unit.tests -Dtests.seed=2AE3A3B7B13CE3D6 -Dtests.class=org.elasticsearch.smoketest.SmokeTestMultiIT -Dtests.method="test {yaml=smoke_test_multinode/10_basic/cluster health basic test, one index}" -Des.logger.level=ERROR -Dtests.assertion.disabled=false -Dtests.security.manager=true -Dtests.heap.size=512m -Dtests.locale=ar_YE -Dtests.timezone=Asia/Hong_Kong -Dtests.rest.suite=smoke_test_multinode FAILURE 38.5s | SmokeTestMultiIT.test {yaml=smoke_test_multinode/10_basic/cluster health basic test, one index} <<< > Throwable #1: java.lang.AssertionError: expected [2xx] status code but api [cluster.health] returned [408 Request Timeout] [{"cluster_name":"prepare_release","status":"yellow","timed_out":true,"number_of_nodes":2,"number_of_data_nodes":2,"active_primary_shards":3,"active_shards":3,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":3,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":50.0}] ``` We no longer check for unassigned shards, and we wait for `yellow` status instead of `green`. Closes #12852.
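At the REST level, the relaxed check amounts to waiting for `yellow` (all primaries allocated) rather than `green` (all replicas allocated too). Roughly, assuming a node listening on localhost:9200:

```sh
# yellow is reachable even when the disk watermark blocks replica allocation
curl -X GET 'http://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=50s'
```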
--- .../rest-api-spec/test/smoke_test_multinode/10_basic.yaml | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/qa/smoke-test-multinode/rest-api-spec/test/smoke_test_multinode/10_basic.yaml b/qa/smoke-test-multinode/rest-api-spec/test/smoke_test_multinode/10_basic.yaml index 74066ebf6b1..c6b50b6d38a 100644 --- a/qa/smoke-test-multinode/rest-api-spec/test/smoke_test_multinode/10_basic.yaml +++ b/qa/smoke-test-multinode/rest-api-spec/test/smoke_test_multinode/10_basic.yaml @@ -1,5 +1,6 @@ # Integration tests for smoke testing multi-node IT -# +# If the local machine running the test is low on disk space +# we can have one unassigned shard --- "cluster health basic test, one index": - do: @@ -12,7 +13,7 @@ - do: cluster.health: - wait_for_status: green + wait_for_status: yellow - is_true: cluster_name - is_false: timed_out @@ -22,5 +23,4 @@ - gt: { active_shards: 0 } - gte: { relocating_shards: 0 } - match: { initializing_shards: 0 } - - match: { unassigned_shards: 0 } - gte: { number_of_pending_tasks: 0 } From 2b6c5762f4fee1e42fa6b75e3710d75086cd65b7 Mon Sep 17 00:00:00 2001 From: David Pilato Date: Fri, 14 Aug 2015 16:21:55 +0200 Subject: [PATCH 21/39] [build] display ignored artifact when checking licenses --- .../src/main/resources/license-check/check_license_and_sha.pl | 1 + 1 file changed, 1 insertion(+) diff --git a/dev-tools/src/main/resources/license-check/check_license_and_sha.pl b/dev-tools/src/main/resources/license-check/check_license_and_sha.pl index 9263244dd2b..4d9d5ba06b8 100755 --- a/dev-tools/src/main/resources/license-check/check_license_and_sha.pl +++ b/dev-tools/src/main/resources/license-check/check_license_and_sha.pl @@ -30,6 +30,7 @@ $Source = File::Spec->rel2abs($Source); say "LICENSE DIR: $License_Dir"; say "SOURCE: $Source"; +say "IGNORE: $Ignore"; die "License dir is not a directory: $License_Dir\n" . usage() unless -d $License_Dir; From 0bb9593596aabdacb4d4b3dafa391307815bb6e2 Mon Sep 17 00:00:00 2001 From: David Pilato Date: Fri, 14 Aug 2015 16:22:46 +0200 Subject: [PATCH 22/39] Fix a typo in comment --- core/src/main/java/org/elasticsearch/plugins/PluginManager.java | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/core/src/main/java/org/elasticsearch/plugins/PluginManager.java b/core/src/main/java/org/elasticsearch/plugins/PluginManager.java index a692504664d..9553ac7a0e3 100644 --- a/core/src/main/java/org/elasticsearch/plugins/PluginManager.java +++ b/core/src/main/java/org/elasticsearch/plugins/PluginManager.java @@ -453,7 +453,7 @@ public class PluginManager { List urls() { List urls = new ArrayList<>(); if (version != null) { - // Elasticsearch new download service uses groupId org.elasticsearch.plugins from 2.0.0 + // Elasticsearch new download service uses groupId org.elasticsearch.plugin from 2.0.0 if (user == null) { // TODO Update to https if (!Strings.isNullOrEmpty(System.getProperty(PROPERTY_SUPPORT_STAGING_URLS))) { From d21afc8090a91dffb4883cf6e0cae293e4a68e60 Mon Sep 17 00:00:00 2001 From: David Pilato Date: Fri, 14 Aug 2015 16:52:55 +0200 Subject: [PATCH 23/39] [maven] rename artifactIds from `elasticsearch-something` to `something` In plugins, we are using inconsistent naming. We use `elasticsearch-cloud-aws` as the artifactId, which generates a jar file called `elasticsearch-cloud-aws-VERSION.jar`. But when you want to install the plugin, you will end up with a shorter name for the plugin `cloud-aws`.
``` bin/plugin install cloud-aws ``` This commit changes that and uses consistent names for `artifactId`, and thus `finalName`. Also changed maven names. --- migrate.sh | 2 +- plugins/analysis-icu/pom.xml | 6 +++--- plugins/analysis-kuromoji/pom.xml | 7 +++---- plugins/analysis-phonetic/pom.xml | 6 +++--- plugins/analysis-smartcn/pom.xml | 6 +++--- plugins/analysis-stempel/pom.xml | 6 +++--- plugins/cloud-aws/pom.xml | 6 +++--- plugins/cloud-azure/pom.xml | 6 +++--- plugins/cloud-gce/pom.xml | 6 +++--- plugins/delete-by-query/pom.xml | 6 +++--- plugins/jvm-example/pom.xml | 6 +++--- plugins/lang-javascript/pom.xml | 6 +++--- plugins/lang-python/pom.xml | 6 +++--- plugins/mapper-size/pom.xml | 6 +++--- plugins/pom.xml | 4 ++-- plugins/site-example/pom.xml | 6 +++--- pom.xml | 1 + qa/smoke-test-plugins/pom.xml | 28 ++++++++++++++-------------- 18 files changed, 60 insertions(+), 60 deletions(-) diff --git a/migrate.sh b/migrate.sh index c8f80ccd828..a47e73c0482 100755 --- a/migrate.sh +++ b/migrate.sh @@ -68,7 +68,7 @@ function migratePlugin() { mkdir -p plugins/$1 git mv -k * plugins/$1 > /dev/null 2>/dev/null git rm .gitignore > /dev/null 2>/dev/null - # echo "### change $1 groupId to org.elasticsearch.plugins" + # echo "### change $1 groupId to org.elasticsearch.plugin" # Change the groupId to avoid conflicts with existing 2.0.0 versions. replaceLine " org.elasticsearch<\/groupId>" " org.elasticsearch.plugin<\/groupId>" "plugins/$1/pom.xml" diff --git a/plugins/analysis-icu/pom.xml b/plugins/analysis-icu/pom.xml index a25f6f350f5..b13f69dbd53 100644 --- a/plugins/analysis-icu/pom.xml +++ b/plugins/analysis-icu/pom.xml @@ -6,12 +6,12 @@ org.elasticsearch.plugin - elasticsearch-plugin + plugins 2.1.0-SNAPSHOT - elasticsearch-analysis-icu - Elasticsearch ICU Analysis plugin + analysis-icu + Plugin: Analysis: ICU The ICU Analysis plugin integrates Lucene ICU module into elasticsearch, adding ICU-related analysis components. diff --git a/plugins/analysis-kuromoji/pom.xml b/plugins/analysis-kuromoji/pom.xml index d6e87ad17d9..71d290fddc1 100644 --- a/plugins/analysis-kuromoji/pom.xml +++ b/plugins/analysis-kuromoji/pom.xml @@ -6,13 +6,12 @@ org.elasticsearch.plugin - elasticsearch-plugin + plugins 2.1.0-SNAPSHOT - elasticsearch-analysis-kuromoji - jar - Elasticsearch Japanese (kuromoji) Analysis plugin + analysis-kuromoji + Plugin: Analysis: Japanese (kuromoji) The Japanese (kuromoji) Analysis plugin integrates Lucene kuromoji analysis module into elasticsearch. diff --git a/plugins/analysis-phonetic/pom.xml b/plugins/analysis-phonetic/pom.xml index ae3b7917872..a5b726ecf7d 100644 --- a/plugins/analysis-phonetic/pom.xml +++ b/plugins/analysis-phonetic/pom.xml @@ -6,12 +6,12 @@ org.elasticsearch.plugin - elasticsearch-plugin + plugins 2.1.0-SNAPSHOT - elasticsearch-analysis-phonetic - Elasticsearch Phonetic Analysis plugin + analysis-phonetic + Plugin: Analysis: Phonetic The Phonetic Analysis plugin integrates phonetic token filter analysis with elasticsearch. diff --git a/plugins/analysis-smartcn/pom.xml b/plugins/analysis-smartcn/pom.xml index a25e787cccb..294dbaa4b7a 100644 --- a/plugins/analysis-smartcn/pom.xml +++ b/plugins/analysis-smartcn/pom.xml @@ -6,12 +6,12 @@ org.elasticsearch.plugin - elasticsearch-plugin + plugins 2.1.0-SNAPSHOT - elasticsearch-analysis-smartcn - Elasticsearch Smart Chinese Analysis plugin + analysis-smartcn + Plugin: Analysis: Smart Chinese (smartcn) Smart Chinese Analysis plugin integrates Lucene Smart Chinese analysis module into elasticsearch.
diff --git a/plugins/analysis-stempel/pom.xml b/plugins/analysis-stempel/pom.xml index 7fcce1d14af..139e3dbd908 100644 --- a/plugins/analysis-stempel/pom.xml +++ b/plugins/analysis-stempel/pom.xml @@ -6,12 +6,12 @@ org.elasticsearch.plugin - elasticsearch-plugin + plugins 2.1.0-SNAPSHOT - elasticsearch-analysis-stempel - Elasticsearch Stempel (Polish) Analysis plugin + analysis-stempel + Plugin: Analysis: Polish (stempel) The Stempel (Polish) Analysis plugin integrates Lucene stempel (polish) analysis module into elasticsearch. diff --git a/plugins/cloud-aws/pom.xml b/plugins/cloud-aws/pom.xml index 03d157eadff..157a35077d8 100644 --- a/plugins/cloud-aws/pom.xml +++ b/plugins/cloud-aws/pom.xml @@ -6,12 +6,12 @@ org.elasticsearch.plugin - elasticsearch-plugin + plugins 2.1.0-SNAPSHOT - elasticsearch-cloud-aws - Elasticsearch AWS cloud plugin + cloud-aws + Plugin: Cloud: AWS The Amazon Web Service (AWS) Cloud plugin allows to use AWS API for the unicast discovery mechanism and add S3 repositories. diff --git a/plugins/cloud-azure/pom.xml b/plugins/cloud-azure/pom.xml index c7c83816320..f7fa9dceb2d 100644 --- a/plugins/cloud-azure/pom.xml +++ b/plugins/cloud-azure/pom.xml @@ -17,12 +17,12 @@ governing permissions and limitations under the License. --> org.elasticsearch.plugin - elasticsearch-plugin + plugins 2.1.0-SNAPSHOT - elasticsearch-cloud-azure - Elasticsearch Azure cloud plugin + cloud-azure + Plugin: Cloud: Azure The Azure Cloud plugin allows to use Azure API for the unicast discovery mechanism and add Azure storage repositories. diff --git a/plugins/cloud-gce/pom.xml b/plugins/cloud-gce/pom.xml index 1d62fdb327e..724036aafcd 100644 --- a/plugins/cloud-gce/pom.xml +++ b/plugins/cloud-gce/pom.xml @@ -17,12 +17,12 @@ governing permissions and limitations under the License. --> org.elasticsearch.plugin - elasticsearch-plugin + plugins 2.1.0-SNAPSHOT - elasticsearch-cloud-gce - Elasticsearch Google Compute Engine cloud plugin + cloud-gce + Plugin: Cloud: Google Compute Engine The Google Compute Engine (GCE) Cloud plugin allows to use GCE API for the unicast discovery mechanism. diff --git a/plugins/delete-by-query/pom.xml b/plugins/delete-by-query/pom.xml index d7ea468088e..07d3bb8fbe8 100644 --- a/plugins/delete-by-query/pom.xml +++ b/plugins/delete-by-query/pom.xml @@ -17,12 +17,12 @@ governing permissions and limitations under the License. --> org.elasticsearch.plugin - elasticsearch-plugin + plugins 2.1.0-SNAPSHOT - elasticsearch-delete-by-query - Elasticsearch Delete By Query plugin + delete-by-query + Plugin: Delete By Query The Delete By Query plugin allows to delete documents in Elasticsearch with a single query. 
diff --git a/plugins/jvm-example/pom.xml b/plugins/jvm-example/pom.xml index 0bfd84b82b9..be67cf54eb1 100644 --- a/plugins/jvm-example/pom.xml +++ b/plugins/jvm-example/pom.xml @@ -6,12 +6,12 @@ org.elasticsearch.plugin - elasticsearch-plugin + plugins 2.1.0-SNAPSHOT - elasticsearch-jvm-example - Elasticsearch example JVM plugin + jvm-example + Plugin: JVM example Demonstrates all the pluggable Java entry points in Elasticsearch diff --git a/plugins/lang-javascript/pom.xml b/plugins/lang-javascript/pom.xml index f283aa73989..eaaa29a34b7 100644 --- a/plugins/lang-javascript/pom.xml +++ b/plugins/lang-javascript/pom.xml @@ -6,12 +6,12 @@ org.elasticsearch.plugin - elasticsearch-plugin + plugins 2.1.0-SNAPSHOT - elasticsearch-lang-javascript - Elasticsearch JavaScript language plugin + lang-javascript + Plugin: Language: JavaScript The JavaScript language plugin allows to have javascript as the language of scripts to execute. diff --git a/plugins/lang-python/pom.xml b/plugins/lang-python/pom.xml index 704aff5379c..7b44f61889b 100644 --- a/plugins/lang-python/pom.xml +++ b/plugins/lang-python/pom.xml @@ -6,12 +6,12 @@ org.elasticsearch.plugin - elasticsearch-plugin + plugins 2.1.0-SNAPSHOT - elasticsearch-lang-python - Elasticsearch Python language plugin + lang-python + Plugin: Language: Python The Python language plugin allows to have python as the language of scripts to execute. diff --git a/plugins/mapper-size/pom.xml b/plugins/mapper-size/pom.xml index 2ec3d475e2b..bd483e1868d 100644 --- a/plugins/mapper-size/pom.xml +++ b/plugins/mapper-size/pom.xml @@ -17,12 +17,12 @@ governing permissions and limitations under the License. --> org.elasticsearch.plugin - elasticsearch-plugin + plugins 2.1.0-SNAPSHOT - elasticsearch-mapper-size - Elasticsearch Mapper size plugin + mapper-size + Plugin: Mapper: Size The Mapper Size plugin allows document to record their uncompressed size at index time. diff --git a/plugins/pom.xml b/plugins/pom.xml index fbf88b089fe..f9a976b4946 100644 --- a/plugins/pom.xml +++ b/plugins/pom.xml @@ -6,10 +6,10 @@ 4.0.0 org.elasticsearch.plugin - elasticsearch-plugin + plugins 2.1.0-SNAPSHOT pom - Elasticsearch Plugin POM + Plugin: Parent POM 2009 A parent project for Elasticsearch plugins diff --git a/plugins/site-example/pom.xml b/plugins/site-example/pom.xml index f346678ffc0..3c25d5ae130 100644 --- a/plugins/site-example/pom.xml +++ b/plugins/site-example/pom.xml @@ -6,12 +6,12 @@ org.elasticsearch.plugin - elasticsearch-plugin + plugins 2.1.0-SNAPSHOT - elasticsearch-site-example - Elasticsearch Example site plugin + site-example + Plugin: Example site Demonstrates how to serve resources via elasticsearch. 
diff --git a/pom.xml b/pom.xml index 47f6779ef46..787a4b07f0e 100644 --- a/pom.xml +++ b/pom.xml @@ -1216,6 +1216,7 @@ org.eclipse.jdt.ui.text.custom_code_templates= + diff --git a/qa/smoke-test-plugins/pom.xml b/qa/smoke-test-plugins/pom.xml index b56bc35e57b..5fed8b67d6e 100644 --- a/qa/smoke-test-plugins/pom.xml +++ b/qa/smoke-test-plugins/pom.xml @@ -247,7 +247,7 @@ org.elasticsearch.plugin - elasticsearch-analysis-kuromoji + analysis-kuromoji ${elasticsearch.version} zip true @@ -255,7 +255,7 @@ org.elasticsearch.plugin - elasticsearch-analysis-smartcn + analysis-smartcn ${elasticsearch.version} zip true @@ -263,7 +263,7 @@ org.elasticsearch.plugin - elasticsearch-analysis-stempel + analysis-stempel ${elasticsearch.version} zip true @@ -271,7 +271,7 @@ org.elasticsearch.plugin - elasticsearch-analysis-phonetic + analysis-phonetic ${elasticsearch.version} zip true @@ -279,7 +279,7 @@ org.elasticsearch.plugin - elasticsearch-analysis-icu + analysis-icu ${elasticsearch.version} zip true @@ -287,7 +287,7 @@ org.elasticsearch.plugin - elasticsearch-cloud-gce + cloud-gce ${elasticsearch.version} zip true @@ -295,7 +295,7 @@ org.elasticsearch.plugin - elasticsearch-cloud-azure + cloud-azure ${elasticsearch.version} zip true @@ -303,7 +303,7 @@ org.elasticsearch.plugin - elasticsearch-cloud-aws + cloud-aws ${elasticsearch.version} zip true @@ -311,7 +311,7 @@ org.elasticsearch.plugin - elasticsearch-delete-by-query + delete-by-query ${elasticsearch.version} zip true @@ -319,7 +319,7 @@ org.elasticsearch.plugin - elasticsearch-lang-python + lang-python ${elasticsearch.version} zip true @@ -327,7 +327,7 @@ org.elasticsearch.plugin - elasticsearch-lang-javascript + lang-javascript ${elasticsearch.version} zip true @@ -343,7 +343,7 @@ org.elasticsearch.plugin - elasticsearch-mapper-size + mapper-size ${elasticsearch.version} zip true @@ -351,7 +351,7 @@ org.elasticsearch.plugin - elasticsearch-site-example + site-example ${elasticsearch.version} zip true @@ -359,7 +359,7 @@ org.elasticsearch.plugin - elasticsearch-jvm-example + jvm-example ${elasticsearch.version} zip true From b5eb78875f19dad64db37ac072dc40779644b33c Mon Sep 17 00:00:00 2001 From: David Pilato Date: Fri, 14 Aug 2015 17:18:21 +0200 Subject: [PATCH 24/39] [maven] rename maven names / ids for distribution modules --- distribution/deb/pom.xml | 4 ++-- distribution/fully-loaded/pom.xml | 4 ++-- distribution/pom.xml | 4 ++-- distribution/rpm/pom.xml | 4 ++-- distribution/shaded/pom.xml | 4 ++-- distribution/src/main/resources/bin/elasticsearch | 2 +- distribution/tar/pom.xml | 4 ++-- distribution/zip/pom.xml | 4 ++-- 8 files changed, 15 insertions(+), 15 deletions(-) diff --git a/distribution/deb/pom.xml b/distribution/deb/pom.xml index 4f1a4b0f95b..a86fad9513d 100644 --- a/distribution/deb/pom.xml +++ b/distribution/deb/pom.xml @@ -5,13 +5,13 @@ 4.0.0 org.elasticsearch.distribution - elasticsearch-distribution + distributions 2.1.0-SNAPSHOT org.elasticsearch.distribution.deb elasticsearch - Elasticsearch DEB Distribution + Distribution: Deb diff --git a/distribution/rpm/pom.xml b/distribution/rpm/pom.xml index cd8f321689e..488ed97ac04 100644 --- a/distribution/rpm/pom.xml +++ b/distribution/rpm/pom.xml @@ -5,13 +5,13 @@ 4.0.0 org.elasticsearch.distribution - elasticsearch-distribution + distributions 2.1.0-SNAPSHOT org.elasticsearch.distribution.rpm elasticsearch - Elasticsearch RPM Distribution + Distribution: RPM rpm The RPM distribution of Elasticsearch diff --git a/distribution/shaded/pom.xml b/distribution/shaded/pom.xml index 
a0624745e03..6a4b54f7b18 100644 --- a/distribution/shaded/pom.xml +++ b/distribution/shaded/pom.xml @@ -5,13 +5,13 @@ 4.0.0 org.elasticsearch.distribution - elasticsearch-distribution + distributions 2.1.0-SNAPSHOT org.elasticsearch.distribution.shaded elasticsearch - Elasticsearch Shaded Distribution + Distribution: Shaded JAR diff --git a/distribution/src/main/resources/bin/elasticsearch b/distribution/src/main/resources/bin/elasticsearch index f35e2d29a1e..2d831485a33 100755 --- a/distribution/src/main/resources/bin/elasticsearch +++ b/distribution/src/main/resources/bin/elasticsearch @@ -47,7 +47,7 @@ # hasn't been done, we assume that this is not a packaged version and the # user has forgotten to run Maven to create a package. IS_PACKAGED_VERSION='${project.parent.artifactId}' -if [ "$IS_PACKAGED_VERSION" != "elasticsearch-distribution" ]; then +if [ "$IS_PACKAGED_VERSION" != "distributions" ]; then cat >&2 << EOF Error: You must build the project with Maven or download a pre-built package before you can run Elasticsearch. See 'Building from Source' in README.textile diff --git a/distribution/tar/pom.xml b/distribution/tar/pom.xml index d84450b1d22..33181b281ab 100644 --- a/distribution/tar/pom.xml +++ b/distribution/tar/pom.xml @@ -5,13 +5,13 @@ 4.0.0 org.elasticsearch.distribution - elasticsearch-distribution + distributions 2.1.0-SNAPSHOT org.elasticsearch.distribution.tar elasticsearch - Elasticsearch TAR Distribution + Distribution: TAR diff --git a/rest-api-spec/pom.xml b/rest-api-spec/pom.xml index 32f231a0252..4adc4600aa7 100644 --- a/rest-api-spec/pom.xml +++ b/rest-api-spec/pom.xml @@ -1,7 +1,7 @@ 4.0.0 org.elasticsearch - elasticsearch-rest-api-spec + rest-api-spec 2.1.0-SNAPSHOT Rest API Specification REST API Specification and tests for use with the Elasticsearch REST Test framework From 692cc80523c94677ad2f987a1959b21765b26922 Mon Sep 17 00:00:00 2001 From: David Pilato Date: Tue, 18 Aug 2015 11:02:23 +0200 Subject: [PATCH 30/39] [maven] also rename parent project artifactId Also fixed bad scm links --- core/pom.xml | 2 +- distribution/pom.xml | 2 +- plugins/pom.xml | 2 +- pom.xml | 8 ++++---- qa/pom.xml | 2 +- 5 files changed, 8 insertions(+), 8 deletions(-) diff --git a/core/pom.xml b/core/pom.xml index 7de340bf3a7..5c0dd2555c6 100644 --- a/core/pom.xml +++ b/core/pom.xml @@ -5,7 +5,7 @@ 4.0.0 org.elasticsearch - elasticsearch-parent + parent 2.1.0-SNAPSHOT diff --git a/distribution/pom.xml b/distribution/pom.xml index 7e8c52e1caa..52969ee779c 100644 --- a/distribution/pom.xml +++ b/distribution/pom.xml @@ -5,7 +5,7 @@ 4.0.0 org.elasticsearch - elasticsearch-parent + parent 2.1.0-SNAPSHOT diff --git a/plugins/pom.xml b/plugins/pom.xml index f9a976b4946..e87ef1e0678 100644 --- a/plugins/pom.xml +++ b/plugins/pom.xml @@ -15,7 +15,7 @@ org.elasticsearch - elasticsearch-parent + parent 2.1.0-SNAPSHOT diff --git a/pom.xml b/pom.xml index 7c6fa5840e0..71129a6689c 100644 --- a/pom.xml +++ b/pom.xml @@ -5,7 +5,7 @@ 4.0.0 org.elasticsearch - elasticsearch-parent + parent 2.1.0-SNAPSHOT pom Elasticsearch: Parent POM @@ -27,9 +27,9 @@ - scm:git:git@github.com:elasticsearch/elasticsearch-parent.git - scm:git:git@github.com:elasticsearch/elasticsearch-parent.git - http://github.com/elasticsearch/elasticsearch-parent + scm:git:git@github.com:elastic/elasticsearch.git + scm:git:git@github.com:elastic/elasticsearch.git + http://github.com/elastic/elasticsearch diff --git a/qa/pom.xml b/qa/pom.xml index e303ee27c32..d918de924ae 100644 --- a/qa/pom.xml +++ b/qa/pom.xml @@ -14,7 +14,7 @@ 
org.elasticsearch - elasticsearch-parent + parent 2.1.0-SNAPSHOT From 807d35e96f21c71c4a4419cd2abc83f881a9fcfc Mon Sep 17 00:00:00 2001 From: David Pilato Date: Tue, 18 Aug 2015 13:43:10 +0200 Subject: [PATCH 31/39] [maven] change murmur3 plugin groupId and name --- plugins/mapper-murmur3/pom.xml | 6 +++--- qa/smoke-test-plugins/pom.xml | 2 +- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/plugins/mapper-murmur3/pom.xml b/plugins/mapper-murmur3/pom.xml index 9d4c7db0000..9c7440d9e72 100644 --- a/plugins/mapper-murmur3/pom.xml +++ b/plugins/mapper-murmur3/pom.xml @@ -17,12 +17,12 @@ governing permissions and limitations under the License. --> org.elasticsearch.plugin - elasticsearch-plugin + plugins 2.1.0-SNAPSHOT - elasticsearch-mapper-murmur3 - Elasticsearch Mapper Murmur3 plugin + mapper-murmur3 + Plugin: Mapper: Murmur3 The Mapper Murmur3 plugin allows to compute hashes of a field's values at index-time and to store them in the index. diff --git a/qa/smoke-test-plugins/pom.xml b/qa/smoke-test-plugins/pom.xml index 5fed8b67d6e..8a08bcd619b 100644 --- a/qa/smoke-test-plugins/pom.xml +++ b/qa/smoke-test-plugins/pom.xml @@ -335,7 +335,7 @@ org.elasticsearch.plugin - elasticsearch-mapper-murmur3 + mapper-murmur3 ${elasticsearch.version} zip true From 4a3ea799ec6ebabe0b97fa5aa29b5589b3fb26bc Mon Sep 17 00:00:00 2001 From: David Pilato Date: Tue, 18 Aug 2015 14:36:12 +0200 Subject: [PATCH 32/39] [qa] multinode tests fail when you run low on disk space (85%) In #12853 we actually introduced a test regression. Now that we wait for yellow instead of green, we might have some pending tasks. This commit simplifies all that and only checks the number of nodes within the cluster. --- .../rest-api-spec/test/smoke_test_multinode/10_basic.yaml | 5 ----- 1 file changed, 5 deletions(-) diff --git a/qa/smoke-test-multinode/rest-api-spec/test/smoke_test_multinode/10_basic.yaml b/qa/smoke-test-multinode/rest-api-spec/test/smoke_test_multinode/10_basic.yaml index c6b50b6d38a..f365a61ebdf 100644 --- a/qa/smoke-test-multinode/rest-api-spec/test/smoke_test_multinode/10_basic.yaml +++ b/qa/smoke-test-multinode/rest-api-spec/test/smoke_test_multinode/10_basic.yaml @@ -19,8 +19,3 @@ - is_false: timed_out - gte: { number_of_nodes: 2 } - gte: { number_of_data_nodes: 2 } - - gt: { active_primary_shards: 0 } - - gt: { active_shards: 0 } - - gte: { relocating_shards: 0 } - - match: { initializing_shards: 0 } - - gte: { number_of_pending_tasks: 0 } From 8e93ac5d5c054b0e580059331d05b6f28b22dde0 Mon Sep 17 00:00:00 2001 From: javanna Date: Fri, 14 Aug 2015 18:13:48 +0200 Subject: [PATCH 33/39] Java api: remove execution from TermsQueryBuilder as it has no effect Also introduced ParseField for execution in TermsQueryParser so proper deprecation warnings get printed out when requested.
Closes #12884 --- .../index/query/TermsQueryBuilder.java | 17 ------------- .../index/query/TermsQueryParser.java | 6 ++--- .../template/SimpleIndexTemplateIT.java | 2 +- .../search/query/SearchQueryIT.java | 24 +++++++++---------- .../migration/migrate_2_0/java.asciidoc | 4 ++++ 5 files changed, 19 insertions(+), 34 deletions(-) diff --git a/core/src/main/java/org/elasticsearch/index/query/TermsQueryBuilder.java b/core/src/main/java/org/elasticsearch/index/query/TermsQueryBuilder.java index 9ffdb0c647e..ca54eb3b3d3 100644 --- a/core/src/main/java/org/elasticsearch/index/query/TermsQueryBuilder.java +++ b/core/src/main/java/org/elasticsearch/index/query/TermsQueryBuilder.java @@ -38,8 +38,6 @@ public class TermsQueryBuilder extends QueryBuilder implements BoostableQueryBui private String queryName; - private String execution; - private float boost = -1; /** @@ -118,17 +116,6 @@ public class TermsQueryBuilder extends QueryBuilder implements BoostableQueryBui this.values = values; } - /** - * Sets the execution mode for the terms filter. Cane be either "plain", "bool" - * "and". Defaults to "plain". - * @deprecated elasticsearch now makes better decisions on its own - */ - @Deprecated - public TermsQueryBuilder execution(String execution) { - this.execution = execution; - return this; - } - /** * Sets the minimum number of matches across the provided terms. Defaults to 1. * @deprecated use [bool] query instead @@ -168,10 +155,6 @@ public class TermsQueryBuilder extends QueryBuilder implements BoostableQueryBui builder.startObject(TermsQueryParser.NAME); builder.field(name, values); - if (execution != null) { - builder.field("execution", execution); - } - if (minimumShouldMatch != null) { builder.field("minimum_should_match", minimumShouldMatch); } diff --git a/core/src/main/java/org/elasticsearch/index/query/TermsQueryParser.java b/core/src/main/java/org/elasticsearch/index/query/TermsQueryParser.java index fa643892e45..db2fed37226 100644 --- a/core/src/main/java/org/elasticsearch/index/query/TermsQueryParser.java +++ b/core/src/main/java/org/elasticsearch/index/query/TermsQueryParser.java @@ -52,11 +52,9 @@ public class TermsQueryParser implements QueryParser { public static final String NAME = "terms"; private static final ParseField MIN_SHOULD_MATCH_FIELD = new ParseField("min_match", "min_should_match").withAllDeprecated("Use [bool] query instead"); private static final ParseField DISABLE_COORD_FIELD = new ParseField("disable_coord").withAllDeprecated("Use [bool] query instead"); + private static final ParseField EXECUTION_FIELD = new ParseField("execution").withAllDeprecated("execution is deprecated and has no effect"); private Client client; - @Deprecated - public static final String EXECUTION_KEY = "execution"; - @Inject public TermsQueryParser() { } @@ -141,7 +139,7 @@ public class TermsQueryParser implements QueryParser { throw new QueryParsingException(parseContext, "[terms] query lookup element requires specifying the path"); } } else if (token.isValue()) { - if (EXECUTION_KEY.equals(currentFieldName)) { + if (parseContext.parseFieldMatcher().match(currentFieldName, EXECUTION_FIELD)) { // ignore } else if (parseContext.parseFieldMatcher().match(currentFieldName, MIN_SHOULD_MATCH_FIELD)) { if (minShouldMatch != null) { diff --git a/core/src/test/java/org/elasticsearch/indices/template/SimpleIndexTemplateIT.java b/core/src/test/java/org/elasticsearch/indices/template/SimpleIndexTemplateIT.java index 1cbe5eb56dc..b8185250761 100644 --- 
a/core/src/test/java/org/elasticsearch/indices/template/SimpleIndexTemplateIT.java +++ b/core/src/test/java/org/elasticsearch/indices/template/SimpleIndexTemplateIT.java @@ -344,7 +344,7 @@ public class SimpleIndexTemplateIT extends ESIntegTestCase { .addAlias(new Alias("templated_alias-{index}")) .addAlias(new Alias("filtered_alias").filter("{\"type\":{\"value\":\"type2\"}}")) .addAlias(new Alias("complex_filtered_alias") - .filter(QueryBuilders.termsQuery("_type", "typeX", "typeY", "typeZ").execution("bool"))) + .filter(QueryBuilders.termsQuery("_type", "typeX", "typeY", "typeZ"))) .get(); assertAcked(prepareCreate("test_index").addMapping("type1").addMapping("type2").addMapping("typeX").addMapping("typeY").addMapping("typeZ")); diff --git a/core/src/test/java/org/elasticsearch/search/query/SearchQueryIT.java b/core/src/test/java/org/elasticsearch/search/query/SearchQueryIT.java index bca495bf31a..07363a4f240 100644 --- a/core/src/test/java/org/elasticsearch/search/query/SearchQueryIT.java +++ b/core/src/test/java/org/elasticsearch/search/query/SearchQueryIT.java @@ -1165,7 +1165,7 @@ public class SearchQueryIT extends ESIntegTestCase { } @Test - public void testFieldDatatermsQuery() throws Exception { + public void testTermsQuery() throws Exception { assertAcked(prepareCreate("test").addMapping("type", "str", "type=string", "lng", "type=long", "dbl", "type=double")); indexRandom(true, @@ -1175,60 +1175,60 @@ public class SearchQueryIT extends ESIntegTestCase { client().prepareIndex("test", "type", "4").setSource("str", "4", "lng", 4l, "dbl", 4.0d)); SearchResponse searchResponse = client().prepareSearch("test") - .setQuery(filteredQuery(matchAllQuery(), termsQuery("str", "1", "4").execution("fielddata"))).get(); + .setQuery(filteredQuery(matchAllQuery(), termsQuery("str", "1", "4"))).get(); assertHitCount(searchResponse, 2l); assertSearchHits(searchResponse, "1", "4"); searchResponse = client().prepareSearch("test") - .setQuery(filteredQuery(matchAllQuery(), termsQuery("lng", new long[] {2, 3}).execution("fielddata"))).get(); + .setQuery(filteredQuery(matchAllQuery(), termsQuery("lng", new long[] {2, 3}))).get(); assertHitCount(searchResponse, 2l); assertSearchHits(searchResponse, "2", "3"); searchResponse = client().prepareSearch("test") - .setQuery(filteredQuery(matchAllQuery(), termsQuery("dbl", new double[]{2, 3}).execution("fielddata"))).get(); + .setQuery(filteredQuery(matchAllQuery(), termsQuery("dbl", new double[]{2, 3}))).get(); assertHitCount(searchResponse, 2l); assertSearchHits(searchResponse, "2", "3"); searchResponse = client().prepareSearch("test") - .setQuery(filteredQuery(matchAllQuery(), termsQuery("lng", new int[] {1, 3}).execution("fielddata"))).get(); + .setQuery(filteredQuery(matchAllQuery(), termsQuery("lng", new int[] {1, 3}))).get(); assertHitCount(searchResponse, 2l); assertSearchHits(searchResponse, "1", "3"); searchResponse = client().prepareSearch("test") - .setQuery(filteredQuery(matchAllQuery(), termsQuery("dbl", new float[] {2, 4}).execution("fielddata"))).get(); + .setQuery(filteredQuery(matchAllQuery(), termsQuery("dbl", new float[] {2, 4}))).get(); assertHitCount(searchResponse, 2l); assertSearchHits(searchResponse, "2", "4"); // test partial matching searchResponse = client().prepareSearch("test") - .setQuery(filteredQuery(matchAllQuery(), termsQuery("str", "2", "5").execution("fielddata"))).get(); + .setQuery(filteredQuery(matchAllQuery(), termsQuery("str", "2", "5"))).get(); assertNoFailures(searchResponse); assertHitCount(searchResponse, 1l); 
assertFirstHit(searchResponse, hasId("2")); searchResponse = client().prepareSearch("test") - .setQuery(filteredQuery(matchAllQuery(), termsQuery("dbl", new double[] {2, 5}).execution("fielddata"))).get(); + .setQuery(filteredQuery(matchAllQuery(), termsQuery("dbl", new double[] {2, 5}))).get(); assertNoFailures(searchResponse); assertHitCount(searchResponse, 1l); assertFirstHit(searchResponse, hasId("2")); searchResponse = client().prepareSearch("test") - .setQuery(filteredQuery(matchAllQuery(), termsQuery("lng", new long[] {2, 5}).execution("fielddata"))).get(); + .setQuery(filteredQuery(matchAllQuery(), termsQuery("lng", new long[] {2, 5}))).get(); assertNoFailures(searchResponse); assertHitCount(searchResponse, 1l); assertFirstHit(searchResponse, hasId("2")); // test valid type, but no matching terms searchResponse = client().prepareSearch("test") - .setQuery(filteredQuery(matchAllQuery(), termsQuery("str", "5", "6").execution("fielddata"))).get(); + .setQuery(filteredQuery(matchAllQuery(), termsQuery("str", "5", "6"))).get(); assertHitCount(searchResponse, 0l); searchResponse = client().prepareSearch("test") - .setQuery(filteredQuery(matchAllQuery(), termsQuery("dbl", new double[] {5, 6}).execution("fielddata"))).get(); + .setQuery(filteredQuery(matchAllQuery(), termsQuery("dbl", new double[] {5, 6}))).get(); assertHitCount(searchResponse, 0l); searchResponse = client().prepareSearch("test") - .setQuery(filteredQuery(matchAllQuery(), termsQuery("lng", new long[] {5, 6}).execution("fielddata"))).get(); + .setQuery(filteredQuery(matchAllQuery(), termsQuery("lng", new long[] {5, 6}))).get(); assertHitCount(searchResponse, 0l); } diff --git a/docs/reference/migration/migrate_2_0/java.asciidoc b/docs/reference/migration/migrate_2_0/java.asciidoc index 5f2d2f4834e..d5e9acf9a0a 100644 --- a/docs/reference/migration/migrate_2_0/java.asciidoc +++ b/docs/reference/migration/migrate_2_0/java.asciidoc @@ -74,3 +74,7 @@ The following deprecated methods have been removed: The redundant BytesQueryBuilder has been removed in favour of the WrapperQueryBuilder internally. +==== TermsQueryBuilder execution removed + +The `TermsQueryBuilder#execution` method has been removed as it has no effect, it is ignored by the + corresponding parser. 
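The ParseField pattern introduced for `execution` in PATCH 33 works as follows: a field registered `withAllDeprecated` still matches its key, so TermsQueryParser keeps accepting `execution` in requests, but a match can trigger a deprecation warning (or be rejected in strict mode) instead of silently honoring a setting that no longer does anything. Below is a minimal, self-contained sketch of that match-and-warn idea; ParseFieldSketch and its methods are illustrative stand-ins, not the actual Elasticsearch ParseField API.

import java.util.Arrays;

final class ParseFieldSketch {
    private final String preferredName;
    private final String[] deprecatedNames;
    private final String deprecationMessage; // null while the field is not deprecated

    ParseFieldSketch(String preferredName, String... deprecatedNames) {
        this(preferredName, deprecatedNames, null);
    }

    private ParseFieldSketch(String preferredName, String[] deprecatedNames, String deprecationMessage) {
        this.preferredName = preferredName;
        this.deprecatedNames = deprecatedNames;
        this.deprecationMessage = deprecationMessage;
    }

    // Marks every spelling of this field, the preferred name included, as deprecated.
    ParseFieldSketch withAllDeprecated(String message) {
        String[] all = Arrays.copyOf(deprecatedNames, deprecatedNames.length + 1);
        all[deprecatedNames.length] = preferredName;
        return new ParseFieldSketch(preferredName, all, message);
    }

    // Returns true when the key names this field; warns when that spelling is deprecated.
    boolean match(String currentFieldName) {
        if (currentFieldName.equals(preferredName) && deprecationMessage == null) {
            return true; // the preferred, non-deprecated spelling
        }
        for (String name : deprecatedNames) {
            if (currentFieldName.equals(name)) {
                String why = deprecationMessage != null ? deprecationMessage : "use [" + preferredName + "] instead";
                System.err.println("Deprecated field [" + currentFieldName + "] used, " + why);
                return true;
            }
        }
        return currentFieldName.equals(preferredName);
    }

    public static void main(String[] args) {
        ParseFieldSketch execution = new ParseFieldSketch("execution")
                .withAllDeprecated("execution is deprecated and has no effect");
        System.out.println(execution.match("execution")); // warns on stderr, then prints true
        System.out.println(execution.match("boost"));     // no warning, prints false
    }
}

In TermsQueryParser itself the successful match is simply discarded (the `// ignore` branch in the diff above), which is the intent: old requests keep parsing, and callers are told the key has no effect.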
From 501a1996a38a50dd5460f4a3a18ee5ea959b2331 Mon Sep 17 00:00:00 2001 From: javanna Date: Fri, 14 Aug 2015 18:46:19 +0200 Subject: [PATCH 34/39] Query DSL: remove attempted (not working) support for array in not query parser Closes #12890 --- .../java/org/elasticsearch/index/query/NotQueryParser.java | 4 ---- 1 file changed, 4 deletions(-) diff --git a/core/src/main/java/org/elasticsearch/index/query/NotQueryParser.java b/core/src/main/java/org/elasticsearch/index/query/NotQueryParser.java index 68ffe435d77..6bfe4c7843a 100644 --- a/core/src/main/java/org/elasticsearch/index/query/NotQueryParser.java +++ b/core/src/main/java/org/elasticsearch/index/query/NotQueryParser.java @@ -68,10 +68,6 @@ public class NotQueryParser implements QueryParser { // its the filter, and the name is the field query = parseContext.parseInnerFilter(currentFieldName); } - } else if (token == XContentParser.Token.START_ARRAY) { - queryFound = true; - // its the filter, and the name is the field - query = parseContext.parseInnerFilter(currentFieldName); } else if (token.isValue()) { if ("_name".equals(currentFieldName)) { queryName = parser.text(); From 1e511eda28e2c91032ce1e377aaddf9b8f6030c0 Mon Sep 17 00:00:00 2001 From: Simon Willnauer Date: Tue, 18 Aug 2015 15:44:14 +0200 Subject: [PATCH 35/39] Remove usage of `InetAddress#getLocalHost` This method is very confusing, and if it's used it's likely the wrong thing with respect to the actual bound / published address. This change discourages its use and removes all usage. It's replaced with the actual published address most of the time. --- .../cluster/node/DiscoveryNode.java | 2 +- .../cluster/service/InternalClusterService.java | 5 ++++- .../common/network/NetworkUtils.java | 11 ----------- .../common/transport/DummyTransportAddress.java | 15 +++++++++++++++ .../transport/InetSocketTransportAddress.java | 17 ++++++++++++++++- .../common/transport/LocalTransportAddress.java | 17 ++++++++++++++++- .../common/transport/TransportAddress.java | 17 +++++++++++++++++ .../allocation/decider/MockDiskUsagesIT.java | 3 ++- .../elasticsearch/test/InternalTestCluster.java | 2 +- .../main/resources/forbidden/all-signatures.txt | 3 +++ 10 files changed, 75 insertions(+), 17 deletions(-) diff --git a/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNode.java b/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNode.java index 87fe5017fa6..cc5ff81e8d8 100644 --- a/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNode.java +++ b/core/src/main/java/org/elasticsearch/cluster/node/DiscoveryNode.java @@ -138,7 +138,7 @@ public class DiscoveryNode implements Streamable, ToXContent { * @param version the version of the node.
*/ public DiscoveryNode(String nodeName, String nodeId, TransportAddress address, Map attributes, Version version) { - this(nodeName, nodeId, NetworkUtils.getLocalHost().getHostName(), NetworkUtils.getLocalHost().getHostAddress(), address, attributes, version); + this(nodeName, nodeId, address.getHost(), address.getAddress(), address, attributes, version); } /** diff --git a/core/src/main/java/org/elasticsearch/cluster/service/InternalClusterService.java b/core/src/main/java/org/elasticsearch/cluster/service/InternalClusterService.java index 1fea2edec29..b992c3612ee 100644 --- a/core/src/main/java/org/elasticsearch/cluster/service/InternalClusterService.java +++ b/core/src/main/java/org/elasticsearch/cluster/service/InternalClusterService.java @@ -40,6 +40,8 @@ import org.elasticsearch.common.logging.ESLogger; import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.text.StringText; +import org.elasticsearch.common.transport.BoundTransportAddress; +import org.elasticsearch.common.transport.TransportAddress; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.util.concurrent.*; import org.elasticsearch.discovery.Discovery; @@ -159,7 +161,8 @@ public class InternalClusterService extends AbstractLifecycleComponent nodeAttributes = discoveryNodeService.buildAttributes(); // note, we rely on the fact that its a new id each time we start, see FD and "kill -9" handling final String nodeId = DiscoveryService.generateNodeId(settings); - DiscoveryNode localNode = new DiscoveryNode(settings.get("name"), nodeId, transportService.boundAddress().publishAddress(), nodeAttributes, version); + final TransportAddress publishAddress = transportService.boundAddress().publishAddress(); + DiscoveryNode localNode = new DiscoveryNode(settings.get("name"), nodeId, publishAddress, nodeAttributes, version); DiscoveryNodes.Builder nodeBuilder = DiscoveryNodes.builder().put(localNode).localNodeId(localNode.id()); this.clusterState = ClusterState.builder(clusterState).nodes(nodeBuilder).blocks(initialBlocks).build(); this.transportService.setLocalNode(localNode); diff --git a/core/src/main/java/org/elasticsearch/common/network/NetworkUtils.java b/core/src/main/java/org/elasticsearch/common/network/NetworkUtils.java index 81bf63dae4f..39705e82905 100644 --- a/core/src/main/java/org/elasticsearch/common/network/NetworkUtils.java +++ b/core/src/main/java/org/elasticsearch/common/network/NetworkUtils.java @@ -127,17 +127,6 @@ public abstract class NetworkUtils { return Constants.WINDOWS ? false : true; } - /** Returns localhost, or if its misconfigured, falls back to loopback. Use with caution!!!! */ - // TODO: can we remove this? - public static InetAddress getLocalHost() { - try { - return InetAddress.getLocalHost(); - } catch (UnknownHostException e) { - logger.warn("failed to resolve local host, fallback to loopback", e); - return InetAddress.getLoopbackAddress(); - } - } - /** Returns addresses for all loopback interfaces that are up. 
*/ public static InetAddress[] getLoopbackAddresses() throws SocketException { List list = new ArrayList<>(); diff --git a/core/src/main/java/org/elasticsearch/common/transport/DummyTransportAddress.java b/core/src/main/java/org/elasticsearch/common/transport/DummyTransportAddress.java index 47f089a1e14..74bcfecdc69 100644 --- a/core/src/main/java/org/elasticsearch/common/transport/DummyTransportAddress.java +++ b/core/src/main/java/org/elasticsearch/common/transport/DummyTransportAddress.java @@ -44,6 +44,21 @@ public class DummyTransportAddress implements TransportAddress { return other == INSTANCE; } + @Override + public String getHost() { + return "dummy"; + } + + @Override + public String getAddress() { + return "0.0.0.0"; // see https://en.wikipedia.org/wiki/0.0.0.0 + } + + @Override + public int getPort() { + return 42; + } + @Override public DummyTransportAddress readFrom(StreamInput in) throws IOException { return INSTANCE; diff --git a/core/src/main/java/org/elasticsearch/common/transport/InetSocketTransportAddress.java b/core/src/main/java/org/elasticsearch/common/transport/InetSocketTransportAddress.java index a13e24f3c3b..f4f686ff2e5 100644 --- a/core/src/main/java/org/elasticsearch/common/transport/InetSocketTransportAddress.java +++ b/core/src/main/java/org/elasticsearch/common/transport/InetSocketTransportAddress.java @@ -30,7 +30,7 @@ import java.net.InetSocketAddress; /** * A transport address used for IP socket address (wraps {@link java.net.InetSocketAddress}). */ -public class InetSocketTransportAddress implements TransportAddress { +public final class InetSocketTransportAddress implements TransportAddress { private static boolean resolveAddress = false; @@ -92,6 +92,21 @@ public class InetSocketTransportAddress implements TransportAddress { address.getAddress().equals(((InetSocketTransportAddress) other).address.getAddress()); } + @Override + public String getHost() { + return address.getHostName(); + } + + @Override + public String getAddress() { + return address.getAddress().getHostAddress(); + } + + @Override + public int getPort() { + return address.getPort(); + } + public InetSocketAddress address() { return this.address; } diff --git a/core/src/main/java/org/elasticsearch/common/transport/LocalTransportAddress.java b/core/src/main/java/org/elasticsearch/common/transport/LocalTransportAddress.java index 8935275e222..e3efa20af18 100644 --- a/core/src/main/java/org/elasticsearch/common/transport/LocalTransportAddress.java +++ b/core/src/main/java/org/elasticsearch/common/transport/LocalTransportAddress.java @@ -29,7 +29,7 @@ import java.io.IOException; /** * */ -public class LocalTransportAddress implements TransportAddress { +public final class LocalTransportAddress implements TransportAddress { public static final LocalTransportAddress PROTO = new LocalTransportAddress("_na"); @@ -57,6 +57,21 @@ public class LocalTransportAddress implements TransportAddress { return other instanceof LocalTransportAddress && id.equals(((LocalTransportAddress) other).id); } + @Override + public String getHost() { + return "local"; + } + + @Override + public String getAddress() { + return "0.0.0.0"; // see https://en.wikipedia.org/wiki/0.0.0.0 + } + + @Override + public int getPort() { + return 0; + } + @Override public LocalTransportAddress readFrom(StreamInput in) throws IOException { return new LocalTransportAddress(in); diff --git a/core/src/main/java/org/elasticsearch/common/transport/TransportAddress.java 
b/core/src/main/java/org/elasticsearch/common/transport/TransportAddress.java index c5051fadbe6..910b1fc6af2 100644 --- a/core/src/main/java/org/elasticsearch/common/transport/TransportAddress.java +++ b/core/src/main/java/org/elasticsearch/common/transport/TransportAddress.java @@ -28,7 +28,24 @@ import org.elasticsearch.common.io.stream.Writeable; */ public interface TransportAddress extends Writeable { + /** + * Returns the host string for this transport address + */ + String getHost(); + + /** + * Returns the address string for this transport address + */ + String getAddress(); + + /** + * Returns the port of this transport address if applicable + */ + int getPort(); + short uniqueAddressTypeId(); boolean sameHost(TransportAddress other); + + public String toString(); } diff --git a/core/src/test/java/org/elasticsearch/cluster/routing/allocation/decider/MockDiskUsagesIT.java b/core/src/test/java/org/elasticsearch/cluster/routing/allocation/decider/MockDiskUsagesIT.java index 79612d07b0e..b0ae30a06d4 100644 --- a/core/src/test/java/org/elasticsearch/cluster/routing/allocation/decider/MockDiskUsagesIT.java +++ b/core/src/test/java/org/elasticsearch/cluster/routing/allocation/decider/MockDiskUsagesIT.java @@ -27,6 +27,7 @@ import org.elasticsearch.cluster.*; import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.routing.RoutingNode; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.transport.DummyTransportAddress; import org.elasticsearch.monitor.fs.FsInfo; import org.elasticsearch.test.ESIntegTestCase; import org.junit.Test; @@ -167,7 +168,7 @@ public class MockDiskUsagesIT extends ESIntegTestCase { usage.getTotalBytes(), usage.getFreeBytes(), usage.getFreeBytes()); paths[0] = path; FsInfo fsInfo = new FsInfo(System.currentTimeMillis(), paths); - return new NodeStats(new DiscoveryNode(nodeName, null, Version.V_2_0_0_beta1), + return new NodeStats(new DiscoveryNode(nodeName, DummyTransportAddress.INSTANCE, Version.CURRENT), System.currentTimeMillis(), null, null, null, null, null, fsInfo, diff --git a/core/src/test/java/org/elasticsearch/test/InternalTestCluster.java b/core/src/test/java/org/elasticsearch/test/InternalTestCluster.java index 9eaab6d8b4b..edf133e3fe7 100644 --- a/core/src/test/java/org/elasticsearch/test/InternalTestCluster.java +++ b/core/src/test/java/org/elasticsearch/test/InternalTestCluster.java @@ -108,6 +108,7 @@ import org.junit.Assert; import java.io.Closeable; import java.io.IOException; +import java.net.InetAddress; import java.net.InetSocketAddress; import java.nio.file.Path; import java.util.ArrayList; @@ -504,7 +505,6 @@ public final class InternalTestCluster extends TestCluster { public static String clusterName(String prefix, long clusterSeed) { StringBuilder builder = new StringBuilder(prefix); final int childVM = RandomizedTest.systemPropertyAsInt(SysGlobals.CHILDVM_SYSPROP_JVM_ID, 0); - builder.append('-').append(NetworkUtils.getLocalHost().getHostName()); builder.append("-CHILD_VM=[").append(childVM).append(']'); builder.append("-CLUSTER_SEED=[").append(clusterSeed).append(']'); // if multiple maven task run on a single host we better have an identifier that doesn't rely on input params diff --git a/dev-tools/src/main/resources/forbidden/all-signatures.txt b/dev-tools/src/main/resources/forbidden/all-signatures.txt index 642310519c8..f697b323569 100644 --- a/dev-tools/src/main/resources/forbidden/all-signatures.txt +++ b/dev-tools/src/main/resources/forbidden/all-signatures.txt @@ -18,6 
+18,9 @@ java.net.URL#getPath() java.net.URL#getFile() +@defaultMessage Usage of getLocalHost is discouraged +java.net.InetAddress#getLocalHost() + @defaultMessage Use java.nio.file instead of java.io.File API java.util.jar.JarFile java.util.zip.ZipFile From 60f273c891230b06b12a8d44f1432b931b14d1d6 Mon Sep 17 00:00:00 2001 From: Simon Willnauer Date: Tue, 18 Aug 2015 17:32:20 +0200 Subject: [PATCH 36/39] Suppress forbiddenAPI in logger when using localhost --- .../elasticsearch/common/logging/Loggers.java | 24 ++++++++++++------- 1 file changed, 16 insertions(+), 8 deletions(-) diff --git a/core/src/main/java/org/elasticsearch/common/logging/Loggers.java b/core/src/main/java/org/elasticsearch/common/logging/Loggers.java index 8f546c93e89..de657c07be2 100644 --- a/core/src/main/java/org/elasticsearch/common/logging/Loggers.java +++ b/core/src/main/java/org/elasticsearch/common/logging/Loggers.java @@ -20,6 +20,7 @@ package org.elasticsearch.common.logging; import com.google.common.collect.Lists; +import org.apache.lucene.util.SuppressForbidden; import org.elasticsearch.common.Classes; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.Index; @@ -74,20 +75,27 @@ public class Loggers { return getLogger(buildClassLoggerName(clazz), settings, prefixes); } + @SuppressForbidden(reason = "using localhost for logging on which host it is is fine") + private static InetAddress getHostAddress() { + try { + return InetAddress.getLocalHost(); + } catch (UnknownHostException e) { + return null; + } + } + public static ESLogger getLogger(String loggerName, Settings settings, String... prefixes) { List prefixesList = newArrayList(); if (settings.getAsBoolean("logger.logHostAddress", false)) { - try { - prefixesList.add(InetAddress.getLocalHost().getHostAddress()); - } catch (UnknownHostException e) { - // ignore + final InetAddress addr = getHostAddress(); + if (addr != null) { + prefixesList.add(addr.getHostAddress()); } } if (settings.getAsBoolean("logger.logHostName", false)) { - try { - prefixesList.add(InetAddress.getLocalHost().getHostName()); - } catch (UnknownHostException e) { - // ignore + final InetAddress addr = getHostAddress(); + if (addr != null) { + prefixesList.add(addr.getHostName()); } } String name = settings.get("name"); From 4c5cfd02cc25c08d9567653a06f5a97ccf08f21d Mon Sep 17 00:00:00 2001 From: Ryan Ernst Date: Tue, 18 Aug 2015 10:13:01 -0700 Subject: [PATCH 37/39] Add javadocs to repository types registry methods --- .../elasticsearch/repositories/RepositoryTypesRegistry.java | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/core/src/main/java/org/elasticsearch/repositories/RepositoryTypesRegistry.java b/core/src/main/java/org/elasticsearch/repositories/RepositoryTypesRegistry.java index b1e3041e746..e66c79da06f 100644 --- a/core/src/main/java/org/elasticsearch/repositories/RepositoryTypesRegistry.java +++ b/core/src/main/java/org/elasticsearch/repositories/RepositoryTypesRegistry.java @@ -34,11 +34,16 @@ public class RepositoryTypesRegistry { private final ExtensionPoint.TypeExtensionPoint shardRepositoryTypes = new ExtensionPoint.TypeExtensionPoint<>("index_repository", IndexShardRepository.class); + /** Adds a new repository type to the registry, bound to the given implementation classes. 
*/ public void registerRepository(String name, Class repositoryType, Class shardRepositoryType) { repositoryTypes.registerExtension(name, repositoryType); shardRepositoryTypes.registerExtension(name, shardRepositoryType); } + /** + * Looks up the given type and binds the implementation into the given binder. + * Throws an {@link IllegalArgumentException} if the given type does not exist. + */ public void bindType(Binder binder, String type) { Settings settings = Settings.builder().put("type", type).build(); repositoryTypes.bindType(binder, settings, "type", null); From 36f48ff32f0d91e43400476895d029aa630e95d2 Mon Sep 17 00:00:00 2001 From: Clinton Gormley Date: Tue, 18 Aug 2015 19:13:54 +0200 Subject: [PATCH 38/39] Docs: Added removal of MVEL to migration docs --- docs/reference/migration/migrate_2_0/removals.asciidoc | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/docs/reference/migration/migrate_2_0/removals.asciidoc b/docs/reference/migration/migrate_2_0/removals.asciidoc index ab764691042..afdc109244c 100644 --- a/docs/reference/migration/migrate_2_0/removals.asciidoc +++ b/docs/reference/migration/migrate_2_0/removals.asciidoc @@ -16,6 +16,11 @@ Facets, deprecated since 1.0, have now been removed. Instead, use the much more powerful and flexible <> framework. This also means that Kibana 3 will not work with Elasticsearch 2.0. +==== MVEL has been removed + +The MVEL scripting language has been removed. The default scripting language +is now Groovy. + ==== Delete-by-query is now a plugin The old delete-by-query functionality was fast but unsafe. It could lead to From a3afe5779203aaaff50384ded3aa2369f47a79f6 Mon Sep 17 00:00:00 2001 From: Ryan Ernst Date: Tue, 18 Aug 2015 10:32:12 -0700 Subject: [PATCH 39/39] Fix compile failure from bad merge after renaming of ExtensionPoint.TypeExtensionPoint --- .../repositories/RepositoryTypesRegistry.java | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/core/src/main/java/org/elasticsearch/repositories/RepositoryTypesRegistry.java b/core/src/main/java/org/elasticsearch/repositories/RepositoryTypesRegistry.java index e66c79da06f..d2f02aa5795 100644 --- a/core/src/main/java/org/elasticsearch/repositories/RepositoryTypesRegistry.java +++ b/core/src/main/java/org/elasticsearch/repositories/RepositoryTypesRegistry.java @@ -29,10 +29,10 @@ import org.elasticsearch.index.snapshots.IndexShardRepository; */ public class RepositoryTypesRegistry { // invariant: repositories and shardRepositories have the same keyset - private final ExtensionPoint.TypeExtensionPoint repositoryTypes = - new ExtensionPoint.TypeExtensionPoint<>("repository", Repository.class); - private final ExtensionPoint.TypeExtensionPoint shardRepositoryTypes = - new ExtensionPoint.TypeExtensionPoint<>("index_repository", IndexShardRepository.class); + private final ExtensionPoint.SelectedType repositoryTypes = + new ExtensionPoint.SelectedType<>("repository", Repository.class); + private final ExtensionPoint.SelectedType shardRepositoryTypes = + new ExtensionPoint.SelectedType<>("index_repository", IndexShardRepository.class); /** Adds a new repository type to the registry, bound to the given implementation classes. */ public void registerRepository(String name, Class repositoryType, Class shardRepositoryType) {
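For context on the `ExtensionPoint.SelectedType` rename in this final patch: a selected-type extension point is essentially a named registry of implementation classes from which exactly one implementation is later chosen by a "type" setting, with unknown names rejected up front. Below is a minimal sketch of that idea, assuming a plain map-backed registry; SelectedTypeSketch is an illustration of the pattern, not the real Guice-binding ExtensionPoint.SelectedType.

import java.util.HashMap;
import java.util.Map;

final class SelectedTypeSketch<T> {
    private final String pointName; // e.g. "repository"; used only in error messages
    private final Map<String, Class<? extends T>> extensions = new HashMap<>();

    SelectedTypeSketch(String pointName) {
        this.pointName = pointName;
    }

    // Registers an implementation class under a unique type name.
    void registerExtension(String name, Class<? extends T> implementation) {
        if (extensions.putIfAbsent(name, implementation) != null) {
            throw new IllegalArgumentException("[" + pointName + "] type [" + name + "] is already registered");
        }
    }

    // Selects the implementation for a configured type, failing fast on unknown names.
    Class<? extends T> select(String name) {
        Class<? extends T> implementation = extensions.get(name);
        if (implementation == null) {
            throw new IllegalArgumentException("Unknown [" + pointName + "] type [" + name + "]");
        }
        return implementation;
    }

    public static void main(String[] args) {
        SelectedTypeSketch<CharSequence> point = new SelectedTypeSketch<>("repository");
        point.registerExtension("fs", String.class);
        System.out.println(point.select("fs"));      // class java.lang.String
        System.out.println(point.select("unknown")); // throws IllegalArgumentException
    }
}

RepositoryTypesRegistry keeps two such extension points in lock-step, one for Repository and one for IndexShardRepository; registerRepository writes the same name into both, which preserves the "same keyset" invariant noted in the class comment, and bindType fails with an IllegalArgumentException when asked for a type that was never registered.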