h1. Elasticsearch

h2. A Distributed RESTful Search Engine

h3. "https://www.elastic.co/products/elasticsearch":https://www.elastic.co/products/elasticsearch

Elasticsearch is a distributed RESTful search engine built for the cloud. Features include:

* Distributed and Highly Available Search Engine.
** Each index is fully sharded with a configurable number of shards.
** Each shard can have one or more replicas.
** Read / Search operations are performed on any of the replica shards.
* Multi Tenant with Multi Types.
** Support for more than one index.
** Support for more than one type per index.
** Index level configuration (number of shards, index storage, ...).
* Various sets of APIs
** HTTP RESTful API
** Native Java API.
** All APIs perform automatic node operation rerouting.
* Document oriented
** No need for upfront schema definition.
** Schema can be defined per type for customization of the indexing process.
* Reliable, Asynchronous Write Behind for long-term persistence.
* (Near) Real Time Search.
* Built on top of Lucene
** Each shard is a fully functional Lucene index.
** All the power of Lucene easily exposed through simple configuration / plugins.
* Per operation consistency
** Single document level operations are atomic, consistent, isolated and durable.
* Open Source under the Apache License, version 2 ("ALv2")

h2. Getting Started

First of all, DON'T PANIC. It will take 5 minutes to get the gist of what Elasticsearch is all about.

h3. Requirements

You need to have a recent version of Java installed. See the "Setup":http://www.elastic.co/guide/en/elasticsearch/reference/current/setup.html#jvm-version page for more information.

h3. Installation

* "Download":https://www.elastic.co/downloads/elasticsearch and unzip the Elasticsearch official distribution.
* Run @bin/elasticsearch@ on Unix, or @bin\elasticsearch.bat@ on Windows.
* Run @curl -X GET http://localhost:9200/@.
* Start more servers ...

h3. Indexing

Let's try to index some twitter-like information. First, let's create a twitter user, and add some tweets (the @twitter@ index will be created automatically):

<pre>
curl -XPUT 'http://localhost:9200/twitter/user/kimchy' -d '{ "name" : "Shay Banon" }'

curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '
{
    "user": "kimchy",
    "postDate": "2009-11-15T13:12:00",
    "message": "Trying out Elasticsearch, so far so good?"
}'

curl -XPUT 'http://localhost:9200/twitter/tweet/2' -d '
{
    "user": "kimchy",
    "postDate": "2009-11-15T14:12:12",
    "message": "Another tweet, will it be indexed?"
}'
</pre>

Now, let's see if the information was added by GETting it:

<pre>
curl -XGET 'http://localhost:9200/twitter/user/kimchy?pretty=true'
curl -XGET 'http://localhost:9200/twitter/tweet/1?pretty=true'
curl -XGET 'http://localhost:9200/twitter/tweet/2?pretty=true'
</pre>
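Since we never declared a schema up front, it can also be instructive to peek at the mapping Elasticsearch generated for us. This step is optional; it assumes the standard @_mapping@ API on the default HTTP port:

<pre>
# Optional: inspect the automatically generated mapping for the twitter index
curl -XGET 'http://localhost:9200/twitter/_mapping?pretty=true'
</pre>

You should see that @postDate@ was picked up as a date field, which is what makes the range search shown later possible.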
h3. Searching

Mmm search..., shouldn't it be elastic?

Let's find all the tweets that @kimchy@ posted:

<pre>
curl -XGET 'http://localhost:9200/twitter/tweet/_search?q=user:kimchy&pretty=true'
</pre>

We can also use the JSON query language Elasticsearch provides instead of a query string:

<pre>
curl -XGET 'http://localhost:9200/twitter/tweet/_search?pretty=true' -d '
{
    "query" : {
        "match" : { "user": "kimchy" }
    }
}'
</pre>

Just for kicks, let's get all the documents stored (we should see the user as well):

<pre>
curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
{
    "query" : {
        "matchAll" : {}
    }
}'
</pre>

We can also do a range search (the @postDate@ was automatically identified as a date):

<pre>
curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
{
    "query" : {
        "range" : {
            "postDate" : { "from" : "2009-11-15T13:00:00", "to" : "2009-11-15T14:00:00" }
        }
    }
}'
</pre>

There are many more options to perform search; after all, it's a search product, no? All the familiar Lucene queries are available through the JSON query language, or through the query parser.

h3. Multi Tenant - Indices and Types

Man, that twitter index might get big (in this case, index size == valuation). Let's see if we can structure our twitter system a bit differently in order to support such large amounts of data.

Elasticsearch supports multiple indices, as well as multiple types per index. In the previous example we used an index called @twitter@, with two types, @user@ and @tweet@.

Another way to define our simple twitter system is to have a different index per user (note, though, that each index has an overhead). Here are the indexing curl commands in this case:

<pre>
curl -XPUT 'http://localhost:9200/kimchy/info/1' -d '{ "name" : "Shay Banon" }'

curl -XPUT 'http://localhost:9200/kimchy/tweet/1' -d '
{
    "user": "kimchy",
    "postDate": "2009-11-15T13:12:00",
    "message": "Trying out Elasticsearch, so far so good?"
}'

curl -XPUT 'http://localhost:9200/kimchy/tweet/2' -d '
{
    "user": "kimchy",
    "postDate": "2009-11-15T14:12:12",
    "message": "Another tweet, will it be indexed?"
}'
</pre>

The above will index information into the @kimchy@ index, with two types, @info@ and @tweet@. Each user will get their own special index.

Complete control on the index level is allowed. As an example, in the above case, we would want to change from the default 5 shards with 1 replica per index, to only 1 shard with 1 replica per index (== per twitter user). Here is how this can be done (the configuration can be in yaml as well):

<pre>
curl -XPUT http://localhost:9200/another_user/ -d '
{
    "index" : {
        "numberOfShards" : 1,
        "numberOfReplicas" : 1
    }
}'
</pre>

Search (and similar operations) are multi-index aware. This means that we can easily search on more than one index (twitter user), for example:

<pre>
curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -d '
{
    "query" : {
        "matchAll" : {}
    }
}'
</pre>

Or on all the indices:

<pre>
curl -XGET 'http://localhost:9200/_search?pretty=true' -d '
{
    "query" : {
        "matchAll" : {}
    }
}'
</pre>

{One liner teaser}: And the cool part about that? You can easily search on multiple twitter users (indices), with different boost levels per user (index), making social search so much simpler (results from my friends rank higher than results from friends of my friends).
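A minimal sketch of what that could look like, assuming the @indices_boost@ search option: search across both user indices, but rank hits from the @kimchy@ index higher than hits from @another_user@:

<pre>
# Sketch: boost results from the kimchy index over another_user
curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -d '
{
    "indices_boost" : {
        "kimchy" : 2.0,
        "another_user" : 1.0
    },
    "query" : {
        "match" : { "message" : "tweet" }
    }
}'
</pre>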
h3. Distributed, Highly Available

Let's face it, things will fail....

Elasticsearch is a highly available and distributed search engine. Each index is broken down into shards, and each shard can have one or more replicas. By default, an index is created with 5 shards and 1 replica per shard (5/1). There are many topologies that can be used, including 1/10 (improve search performance), or 20/1 (improve indexing performance, with search executed in a map-reduce fashion across shards).

In order to play with the distributed nature of Elasticsearch, simply bring more nodes up and shut down nodes. The system will continue to serve requests (make sure you use the correct http port) with the latest data indexed.

h3. Where to go from here?

We have just covered a very small portion of what Elasticsearch is all about. For more information, please refer to the "elastic.co":http://www.elastic.co/products/elasticsearch website.

h3. Building from Source

Elasticsearch uses "Maven":http://maven.apache.org for its build system.

In order to create a distribution, simply run the @mvn clean package -DskipTests@ command in the cloned directory.

The distribution will be created under @target/releases@.

See the "TESTING":TESTING.asciidoc file for more information about running the Elasticsearch test suite.

h3. Upgrading to Elasticsearch 1.x?

In order to ensure a smooth upgrade process from earlier versions of Elasticsearch (< 1.0.0), it is recommended to perform a full cluster restart. Please see the "setup reference":https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html for more details on the upgrade process.

h1. License

<pre>
This software is licensed under the Apache License, version 2 ("ALv2"), quoted below.

Copyright 2009-2015 Elasticsearch <https://www.elastic.co>

Licensed under the Apache License, Version 2.0 (the "License"); you may not
use this file except in compliance with the License. You may obtain a copy of
the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations under
the License.
</pre>