h1. Elasticsearch

h2. A Distributed RESTful Search Engine

h3. "https://www.elastic.co/products/elasticsearch":https://www.elastic.co/products/elasticsearch

Elasticsearch is a distributed RESTful search engine built for the cloud. Features include:

* Distributed and highly available search engine.
** Each index is fully sharded with a configurable number of shards.
** Each shard can have one or more replicas.
** Read/search operations can be performed on any replica shard.
* Multi-tenant.
** Support for more than one index.
** Index-level configuration (number of shards, index storage, ...).
* A variety of APIs.
** HTTP RESTful API.
** Native Java API.
** All APIs perform automatic node operation rerouting.
* Document-oriented.
** No need for an upfront schema definition.
** A schema can be defined to customize the indexing process.
* Reliable, asynchronous write-behind for long-term persistence.
* (Near) real-time search.
* Built on top of Lucene.
** Each shard is a fully functional Lucene index.
** All the power of Lucene easily exposed through simple configuration and plugins.
* Per-operation consistency.
** Single-document-level operations are atomic, consistent, isolated, and durable.

h2. Getting Started

First of all, DON'T PANIC. It will take 5 minutes to get the gist of what Elasticsearch is all about.

h3. Requirements

You need to have a recent version of Java installed. See the "Setup":http://www.elastic.co/guide/en/elasticsearch/reference/current/setup.html#jvm-version page for more information.
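A quick way to check what is on your path (assuming @java@ is installed and visible to your shell):

<pre>
# Print the version of the JVM that Elasticsearch will pick up from the PATH.
java -version
</pre>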

h3. Installation

* "Download":https://www.elastic.co/downloads/elasticsearch and unzip the Elasticsearch official distribution.
* Run @bin/elasticsearch@ on unix, or @bin\elasticsearch.bat@ on windows.
* Run @curl -X GET http://localhost:9200/@.
* Start more servers ...
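If the node is up, the @curl@ call above returns a short JSON description of the node. The node name, cluster name, and version shown here are placeholders; your values will differ:

<pre>
{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "7.5.0",
    ...
  },
  "tagline" : "You Know, for Search"
}
</pre>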

h3. Indexing

Let's try to index some twitter-like information. First, let's index some tweets (the @twitter@ index will be created automatically):

<pre>
curl -XPUT 'http://localhost:9200/twitter/_doc/1?pretty' -H 'Content-Type: application/json' -d '
{
    "user": "kimchy",
    "post_date": "2009-11-15T13:12:00",
    "message": "Trying out Elasticsearch, so far so good?"
}'

curl -XPUT 'http://localhost:9200/twitter/_doc/2?pretty' -H 'Content-Type: application/json' -d '
{
    "user": "kimchy",
    "post_date": "2009-11-15T14:12:12",
    "message": "Another tweet, will it be indexed?"
}'

curl -XPUT 'http://localhost:9200/twitter/_doc/3?pretty' -H 'Content-Type: application/json' -d '
{
    "user": "elastic",
    "post_date": "2010-01-15T01:46:38",
    "message": "Building the site, should be kewl"
}'
</pre>
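Each of these calls should come back with an acknowledgement that the document was created. Shard counts and sequence numbers depend on your setup; the response for the first document will look roughly like this:

<pre>
{
  "_index" : "twitter",
  "_type" : "_doc",
  "_id" : "1",
  "_version" : 1,
  "result" : "created",
  "_shards" : {
    "total" : 2,
    "successful" : 1,
    "failed" : 0
  },
  "_seq_no" : 0,
  "_primary_term" : 1
}
</pre>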

Now, let's see if the information was added by GETting it:

<pre>
curl -XGET 'http://localhost:9200/twitter/_doc/1?pretty=true'
curl -XGET 'http://localhost:9200/twitter/_doc/2?pretty=true'
curl -XGET 'http://localhost:9200/twitter/_doc/3?pretty=true'
</pre>
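For an existing document, the response wraps the original JSON (under @_source@) in some metadata; for the first tweet it will look roughly like this:

<pre>
{
  "_index" : "twitter",
  "_type" : "_doc",
  "_id" : "1",
  "_version" : 1,
  "_seq_no" : 0,
  "_primary_term" : 1,
  "found" : true,
  "_source" : {
    "user": "kimchy",
    "post_date": "2009-11-15T13:12:00",
    "message": "Trying out Elasticsearch, so far so good?"
  }
}
</pre>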

h3. Searching

Mmm, search... shouldn't it be elastic?
Let's find all the tweets that @kimchy@ posted:

<pre>
curl -XGET 'http://localhost:9200/twitter/_search?q=user:kimchy&pretty=true'
</pre>
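The response is a JSON envelope around the matching documents. Timings, scores, and shard counts will vary; trimmed down, its shape is roughly:

<pre>
{
  "took" : 3,
  "timed_out" : false,
  "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 },
  "hits" : {
    "total" : { "value" : 2, "relation" : "eq" },
    "max_score" : 0.3,
    "hits" : [ ... the matching documents, each with its _source ... ]
  }
}
</pre>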

We can also use the JSON query language Elasticsearch provides instead of a query string:

<pre>
curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -H 'Content-Type: application/json' -d '
{
    "query" : {
        "match" : { "user": "kimchy" }
    }
}'
</pre>

Just for kicks, let's get all the documents stored (we should see the tweet from @elastic@ as well):

<pre>
curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -H 'Content-Type: application/json' -d '
{
    "query" : {
        "match_all" : {}
    }
}'
</pre>

We can also run range searches (the @post_date@ field was automatically identified as a date):

<pre>
curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -H 'Content-Type: application/json' -d '
{
    "query" : {
        "range" : {
            "post_date" : { "from" : "2009-11-15T13:00:00", "to" : "2009-11-15T14:00:00" }
        }
    }
}'
</pre>

There are many more ways to search; after all, it's a search product, no? All the familiar Lucene queries are available through the JSON query language, or through the query parser.
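For example, the same Lucene query-string syntax used in the URL above can also be sent in the request body via the @query_string@ query; a quick sketch (field names are from the tweets indexed earlier):

<pre>
curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -H 'Content-Type: application/json' -d '
{
    "query" : {
        "query_string" : { "query" : "user:kimchy AND message:indexed" }
    }
}'
</pre>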

h3. Multi Tenant - Indices and Types

Man, that twitter index might get big (in this case, index size == valuation). Let's see if we can structure our twitter system a bit differently in order to support such large amounts of data.

Elasticsearch supports multiple indices. In the previous example we used an index called @twitter@ that stored tweets for every user.

Another way to define our simple twitter system is to have a different index per user (note, though, that each index has an overhead). Here are the indexing curl commands in this case:

<pre>
curl -XPUT 'http://localhost:9200/kimchy/_doc/1?pretty' -H 'Content-Type: application/json' -d '
{
    "user": "kimchy",
    "post_date": "2009-11-15T13:12:00",
    "message": "Trying out Elasticsearch, so far so good?"
}'

curl -XPUT 'http://localhost:9200/kimchy/_doc/2?pretty' -H 'Content-Type: application/json' -d '
{
    "user": "kimchy",
    "post_date": "2009-11-15T14:12:12",
    "message": "Another tweet, will it be indexed?"
}'
</pre>

The above will index information into the @kimchy@ index. Each user will get their own special index.

Complete control is available at the index level. As an example, in the above case we might want to change from the default of 1 shard with 1 replica per index to 2 shards with 1 replica per index (because this user tweets a lot). Here is how that can be done (the configuration can be in YAML as well):

<pre>
curl -XPUT http://localhost:9200/another_user?pretty -H 'Content-Type: application/json' -d '
{
    "settings" : {
        "index.number_of_shards" : 2,
        "index.number_of_replicas" : 1
    }
}'
</pre>
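To double-check what an index ended up with, its settings can be read back through the settings API:

<pre>
curl -XGET 'http://localhost:9200/another_user/_settings?pretty'
</pre>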

Search (and similar operations) is multi-index aware. This means that we can easily search across more than one
index (twitter user), for example:

<pre>
curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -H 'Content-Type: application/json' -d '
{
    "query" : {
        "match_all" : {}
    }
}'
</pre>

Or on all the indices:

<pre>
curl -XGET 'http://localhost:9200/_search?pretty=true' -H 'Content-Type: application/json' -d '
{
    "query" : {
        "match_all" : {}
    }
}'
</pre>

One-liner teaser: And the cool part about that? You can easily search on multiple twitter users (indices), with different boost levels per user (index), making social search much simpler (results from my friends rank higher than results from friends of my friends).
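As a sketch of that idea, the @indices_boost@ search option weights one index over another (the index names and boost values here are just illustrations built on the examples above):

<pre>
curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -H 'Content-Type: application/json' -d '
{
    "indices_boost" : [
        { "kimchy" : 2.0 },
        { "another_user" : 1.0 }
    ],
    "query" : {
        "match" : { "message" : "tweet" }
    }
}'
</pre>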

h3. Distributed, Highly Available

Let's face it, things will fail....

Elasticsearch is a highly available and distributed search engine. Each index is broken down into shards, and each shard can have one or more replicas. By default, an index is created with 1 shard and 1 replica per shard (1/1). Many other topologies can be used, including 1/10 (to improve search performance) or 20/1 (to improve indexing performance, with search executed in a map-reduce fashion across shards).

In order to play with the distributed nature of Elasticsearch, simply bring more nodes up and shut nodes down. The system will continue to serve requests (make sure you use the correct HTTP port) with the latest data indexed.
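While experimenting, the cluster health API is a convenient way to watch shards being allocated and recovered as nodes come and go:

<pre>
curl -XGET 'http://localhost:9200/_cluster/health?pretty'
</pre>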

h3. Where to go from here?

We have just covered a very small portion of what Elasticsearch is all about. For more information, please refer to the "elastic.co":http://www.elastic.co/products/elasticsearch website. General questions can be asked on the "Elastic Discourse forum":https://discuss.elastic.co or on IRC on Freenode at "#elasticsearch":https://webchat.freenode.net/#elasticsearch. The Elasticsearch GitHub repository is reserved for bug reports and feature requests only.

h3. Building from Source

Elasticsearch uses "Gradle":https://gradle.org for its build system.

In order to create a distribution, simply run the @./gradlew assemble@ command in the cloned directory.

The distribution for each project will be created under the @build/distributions@ directory in that project.

See the "TESTING":TESTING.asciidoc file for more information about running the Elasticsearch test suite.

h3. Upgrading from older Elasticsearch versions

To ensure a smooth upgrade from earlier versions of Elasticsearch, please see our "upgrade documentation":https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html for details on the upgrade process.