Cleanup comments and class names s/ElasticSearch/Elasticsearch

* Clean up s/ElasticSearch/Elasticsearch on docs/*
* Clean up s/ElasticSearch/Elasticsearch on src/* bin/* & pom.xml
 * Clean up s/ElasticSearch/Elasticsearch on NOTICE.txt and README.textile

Closes #4634
This commit is contained in:
Simon Willnauer 2014-01-06 21:58:46 +01:00
parent 8d9af7e7a5
commit fa16969360
680 changed files with 2381 additions and 2409 deletions
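The bulk of this change is a mechanical recursive substitution. The exact command used is not recorded in the commit, but a sketch of the kind of one-liner that could produce it (demonstrated on a scratch directory rather than the real tree) looks like this:

```shell
# Hypothetical reconstruction of the rename; the actual commit touched
# docs/*, src/*, bin/*, pom.xml, NOTICE.txt and README.textile.
tree=$(mktemp -d)
printf 'ElasticSearch is a search engine\n' > "$tree/README.textile"

# Find every file containing the old spelling and rewrite it in place
# (GNU sed; on BSD/macOS use `sed -i ''`).
grep -rl 'ElasticSearch' "$tree" | xargs sed -i 's/ElasticSearch/Elasticsearch/g'

cat "$tree/README.textile"
rm -rf "$tree"
```

Note that a plain sed pass cannot cover everything here: classes such as `org.elasticsearch.bootstrap.ElasticSearch` carry the name in the file name as well, so those renames also require a `git mv` alongside the in-file substitution.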

View File

@@ -1,5 +1,5 @@
-ElasticSearch
-Copyright 2009-2014 ElasticSearch and Shay Banon
+Elasticsearch
+Copyright 2009-2014 Elasticsearch and Shay Banon
This product includes software developed by The Apache Software
Foundation (http://www.apache.org/).

View File

@@ -1,10 +1,10 @@
-h1. ElasticSearch
+h1. Elasticsearch
h2. A Distributed RESTful Search Engine
h3. "http://www.elasticsearch.org":http://www.elasticsearch.org
-ElasticSearch is a distributed RESTful search engine built for the cloud. Features include:
+Elasticsearch is a distributed RESTful search engine built for the cloud. Features include:
* Distributed and Highly Available Search Engine.
** Each index is fully sharded with a configurable number of shards.
@@ -32,11 +32,11 @@ ElasticSearch is a distributed RESTful search engine built for the cloud. Featur
h2. Getting Started
-First of all, DON'T PANIC. It will take 5 minutes to get the gist of what ElasticSearch is all about.
+First of all, DON'T PANIC. It will take 5 minutes to get the gist of what Elasticsearch is all about.
h3. Installation
-* "Download":http://www.elasticsearch.org/download and unzip the ElasticSearch official distribution.
+* "Download":http://www.elasticsearch.org/download and unzip the Elasticsearch official distribution.
* Run @bin/elasticsearch -f@ on unix, or @bin/elasticsearch.bat@ on windows.
* Run @curl -X GET http://localhost:9200/@.
* Start more servers ...
@@ -52,7 +52,7 @@ curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T13:12:00",
-"message": "Trying out Elastic Search, so far so good?"
+"message": "Trying out Elasticsearch, so far so good?"
}'
curl -XPUT 'http://localhost:9200/twitter/tweet/2' -d '
@@ -80,7 +80,7 @@ Lets find all the tweets that @kimchy@ posted:
curl -XGET 'http://localhost:9200/twitter/tweet/_search?q=user:kimchy&pretty=true'
</pre>
-We can also use the JSON query language ElasticSearch provides instead of a query string:
+We can also use the JSON query language Elasticsearch provides instead of a query string:
<pre>
curl -XGET 'http://localhost:9200/twitter/tweet/_search?pretty=true' -d '
@@ -121,7 +121,7 @@ h3. Multi Tenant - Indices and Types
Maan, that twitter index might get big (in this case, index size == valuation). Lets see if we can structure our twitter system a bit differently in order to support such large amount of data.
-ElasticSearch support multiple indices, as well as multiple types per index. In the previous example we used an index called @twitter@, with two types, @user@ and @tweet@.
+Elasticsearch support multiple indices, as well as multiple types per index. In the previous example we used an index called @twitter@, with two types, @user@ and @tweet@.
Another way to define our simple twitter system is to have a different index per user (though note that an index has an overhead). Here is the indexing curl's in this case:
@@ -132,7 +132,7 @@ curl -XPUT 'http://localhost:9200/kimchy/tweet/1' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T13:12:00",
-"message": "Trying out Elastic Search, so far so good?"
+"message": "Trying out Elasticsearch, so far so good?"
}'
curl -XPUT 'http://localhost:9200/kimchy/tweet/2' -d '
@@ -186,17 +186,17 @@ h3. Distributed, Highly Available
Lets face it, things will fail....
-ElasticSearch is a highly available and distributed search engine. Each index is broken down into shards, and each shard can have one or more replica. By default, an index is created with 5 shards and 1 replica per shard (5/1). There are many topologies that can be used, including 1/10 (improve search performance), or 20/1 (improve indexing performance, with search executed in a map reduce fashion across shards).
+Elasticsearch is a highly available and distributed search engine. Each index is broken down into shards, and each shard can have one or more replica. By default, an index is created with 5 shards and 1 replica per shard (5/1). There are many topologies that can be used, including 1/10 (improve search performance), or 20/1 (improve indexing performance, with search executed in a map reduce fashion across shards).
-In order to play with Elastic Search distributed nature, simply bring more nodes up and shut down nodes. The system will continue to serve requests (make sure you use the correct http port) with the latest data indexed.
+In order to play with Elasticsearch distributed nature, simply bring more nodes up and shut down nodes. The system will continue to serve requests (make sure you use the correct http port) with the latest data indexed.
h3. Where to go from here?
-We have just covered a very small portion of what ElasticSearch is all about. For more information, please refer to the "elasticsearch.org":http://www.elasticsearch.org website.
+We have just covered a very small portion of what Elasticsearch is all about. For more information, please refer to the "elasticsearch.org":http://www.elasticsearch.org website.
h3. Building from Source
-ElasticSearch uses "Maven":http://maven.apache.org for its build system.
+Elasticsearch uses "Maven":http://maven.apache.org for its build system.
In order to create a distribution, simply run the @mvn clean package
-DskipTests@ command in the cloned directory.
@@ -211,7 +211,7 @@ h1. License
<pre>
This software is licensed under the Apache 2 license, quoted below.
-Copyright 2009-2013 Shay Banon and ElasticSearch <http://www.elasticsearch.org>
+Copyright 2009-2013 Shay Banon and Elasticsearch <http://www.elasticsearch.org>
Licensed under the Apache License, Version 2.0 (the "License"); you may not
use this file except in compliance with the License. You may obtain a copy of

View File

@@ -135,17 +135,17 @@ launch_service()
es_parms="$es_parms -Des.pidfile=$pidpath"
fi
-# The es-foreground option will tell ElasticSearch not to close stdout/stderr, but it's up to us not to daemonize.
+# The es-foreground option will tell Elasticsearch not to close stdout/stderr, but it's up to us not to daemonize.
if [ "x$daemonized" = "x" ]; then
es_parms="$es_parms -Des.foreground=yes"
exec "$JAVA" $JAVA_OPTS $ES_JAVA_OPTS $es_parms -Des.path.home="$ES_HOME" -cp "$ES_CLASSPATH" $props \
-org.elasticsearch.bootstrap.ElasticSearch
+org.elasticsearch.bootstrap.Elasticsearch
# exec without running it in the background, makes it replace this shell, we'll never get here...
# no need to return something
else
-# Startup ElasticSearch, background it, and write the pid.
+# Startup Elasticsearch, background it, and write the pid.
exec "$JAVA" $JAVA_OPTS $ES_JAVA_OPTS $es_parms -Des.path.home="$ES_HOME" -cp "$ES_CLASSPATH" $props \
-org.elasticsearch.bootstrap.ElasticSearch <&- &
+org.elasticsearch.bootstrap.Elasticsearch <&- &
return $?
fi
}

View File

@@ -65,7 +65,7 @@ REM JAVA_OPTS=%JAVA_OPTS% -XX:HeapDumpPath=$ES_HOME/logs/heapdump.hprof
set ES_CLASSPATH=%ES_CLASSPATH%;%ES_HOME%/lib/${project.build.finalName}.jar;%ES_HOME%/lib/*;%ES_HOME%/lib/sigar/*
set ES_PARAMS=-Delasticsearch -Des-foreground=yes -Des.path.home="%ES_HOME%"
-"%JAVA_HOME%\bin\java" %JAVA_OPTS% %ES_JAVA_OPTS% %ES_PARAMS% %* -cp "%ES_CLASSPATH%" "org.elasticsearch.bootstrap.ElasticSearch"
+"%JAVA_HOME%\bin\java" %JAVA_OPTS% %ES_JAVA_OPTS% %ES_PARAMS% %* -cp "%ES_CLASSPATH%" "org.elasticsearch.bootstrap.Elasticsearch"
goto finally
@@ -76,4 +76,4 @@ pause
:finally
-ENDLOCAL
+ENDLOCAL

View File

@@ -179,7 +179,7 @@ if not "%ES_JAVA_OPTS%" == "" set JVM_OPTS=%JVM_OPTS%;%JVM_ES_JAVA_OPTS%
if "%ES_START_TYPE%" == "" set ES_START_TYPE=manual
if "%ES_STOP_TIMEOUT%" == "" set ES_STOP_TIMEOUT=0
-"%EXECUTABLE%" //IS//%SERVICE_ID% --Startup %ES_START_TYPE% --StopTimeout %ES_STOP_TIMEOUT% --StartClass org.elasticsearch.bootstrap.ElasticSearch --StopClass org.elasticsearch.bootstrap.ElasticSearch --StartMethod main --StopMethod close --Classpath "%ES_CLASSPATH%" --JvmSs %JVM_SS% --JvmMs %JVM_XMS% --JvmMx %JVM_XMX% --JvmOptions %JVM_OPTS% ++JvmOptions %ES_PARAMS% %LOG_OPTS% --PidFile "%SERVICE_ID%.pid" --DisplayName "Elasticsearch %ES_VERSION% (%SERVICE_ID%)" --Description "Elasticsearch %ES_VERSION% Windows Service - http://elasticsearch.org" --Jvm "%JVM_DLL%" --StartMode jvm --StopMode jvm --StartPath "%ES_HOME%"
+"%EXECUTABLE%" //IS//%SERVICE_ID% --Startup %ES_START_TYPE% --StopTimeout %ES_STOP_TIMEOUT% --StartClass org.elasticsearch.bootstrap.Elasticsearch --StopClass org.elasticsearch.bootstrap.Elasticsearch --StartMethod main --StopMethod close --Classpath "%ES_CLASSPATH%" --JvmSs %JVM_SS% --JvmMs %JVM_XMS% --JvmMx %JVM_XMX% --JvmOptions %JVM_OPTS% ++JvmOptions %ES_PARAMS% %LOG_OPTS% --PidFile "%SERVICE_ID%.pid" --DisplayName "Elasticsearch %ES_VERSION% (%SERVICE_ID%)" --Description "Elasticsearch %ES_VERSION% Windows Service - http://elasticsearch.org" --Jvm "%JVM_DLL%" --StartMode jvm --StopMode jvm --StartPath "%ES_HOME%"
if not errorlevel 1 goto installed
@@ -228,4 +228,4 @@ set /a conv=%conv% * 1024
set "%~2=%conv%"
goto:eof
-ENDLOCAL
+ENDLOCAL

View File

@@ -1,4 +1,4 @@
-##################### ElasticSearch Configuration Example #####################
+##################### Elasticsearch Configuration Example #####################
# This file contains an overview of various configuration settings,
# targeted at operations staff. Application developers should
@@ -7,7 +7,7 @@
# The installation procedure is covered at
# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/setup.html>.
#
-# ElasticSearch comes with reasonable defaults for most settings,
+# Elasticsearch comes with reasonable defaults for most settings,
# so you can try it out without bothering with configuration.
#
# Most of the time, these defaults are just fine for running a production
@@ -128,7 +128,7 @@
# The "number_of_replicas" can be increased or decreased anytime,
# by using the Index Update Settings API.
#
-# ElasticSearch takes care about load balancing, relocating, gathering the
+# Elasticsearch takes care about load balancing, relocating, gathering the
# results from nodes, etc. Experiment with different settings to fine-tune
# your setup.
@@ -174,7 +174,7 @@
################################### Memory ####################################
-# ElasticSearch performs poorly when JVM starts swapping: you should ensure that
+# Elasticsearch performs poorly when JVM starts swapping: you should ensure that
# it _never_ swaps.
#
# Set this property to true to lock the memory:
@@ -183,15 +183,15 @@
# Make sure that the ES_MIN_MEM and ES_MAX_MEM environment variables are set
# to the same value, and that the machine has enough memory to allocate
-# for ElasticSearch, leaving enough memory for the operating system itself.
+# for Elasticsearch, leaving enough memory for the operating system itself.
#
-# You should also make sure that the ElasticSearch process is allowed to lock
+# You should also make sure that the Elasticsearch process is allowed to lock
# the memory, eg. by using `ulimit -l unlimited`.
############################## Network And HTTP ###############################
-# ElasticSearch, by default, binds itself to the 0.0.0.0 address, and listens
+# Elasticsearch, by default, binds itself to the 0.0.0.0 address, and listens
# on port [9200-9300] for HTTP traffic and on port [9300-9400] for node-to-node
# communication. (the range means that if the port is busy, it will automatically
# try the next port).

View File

@@ -16,13 +16,13 @@ See the {client}/python-api/current/index.html[official Elasticsearch Python cli
Python client.
* https://github.com/eriky/ESClient[ESClient]:
-A lightweight and easy to use Python client for ElasticSearch.
+A lightweight and easy to use Python client for Elasticsearch.
* https://github.com/humangeo/rawes[rawes]:
Python low level client.
* https://github.com/mozilla/elasticutils/[elasticutils]:
-A friendly chainable ElasticSearch interface for Python.
+A friendly chainable Elasticsearch interface for Python.
* http://intridea.github.io/surfiki-refine-elasticsearch/[Surfiki Refine]:
Python Map-Reduce engine targeting Elasticsearch indices.
@@ -76,13 +76,13 @@ See the {client}/php-api/current/index.html[official Elasticsearch PHP client].
See the {client}/javascript-api/current/index.html[official Elasticsearch JavaScript client].
* https://github.com/fullscale/elastic.js[Elastic.js]:
-A JavaScript implementation of the ElasticSearch Query DSL and Core API.
+A JavaScript implementation of the Elasticsearch Query DSL and Core API.
* https://github.com/phillro/node-elasticsearch-client[node-elasticsearch-client]:
-A NodeJS client for elastic search.
+A NodeJS client for Elasticsearch.
* https://github.com/ramv/node-elastical[node-elastical]:
-Node.js client for the ElasticSearch REST API
+Node.js client for the Elasticsearch REST API
* https://github.com/printercu/elastics[elastics]: Simple tiny client that just works
@@ -181,6 +181,6 @@ See the {client}/javascript-api/current/index.html[official Elasticsearch JavaSc
[[community-cold-fusion]]
=== Cold Fusion
-* https://github.com/jasonfill/ColdFusion-ElasticSearch-Client[ColdFusion-ElasticSearch-Client]
+* https://github.com/jasonfill/ColdFusion-ElasticSearch-Client[ColdFusion-Elasticsearch-Client]
Cold Fusion client for Elasticsearch

View File

@@ -5,7 +5,7 @@
Chrome curl-like plugin for runninq requests against an Elasticsearch node
* https://github.com/mobz/elasticsearch-head[elasticsearch-head]:
-A web front end for an elastic search cluster.
+A web front end for an Elasticsearch cluster.
* https://github.com/OlegKunitsyn/elasticsearch-browser[browser]:
Web front-end over elasticsearch data.

View File

@@ -3,19 +3,19 @@
* http://grails.org/plugin/elasticsearch[Grails]:
-ElasticSearch Grails plugin.
+Elasticsearch Grails plugin.
* https://github.com/carrot2/elasticsearch-carrot2[carrot2]:
Results clustering with carrot2
* https://github.com/angelf/escargot[escargot]:
-ElasticSearch connector for Rails (WIP).
+Elasticsearch connector for Rails (WIP).
-* https://metacpan.org/module/Catalyst::Model::Search::ElasticSearch[Catalyst]:
-ElasticSearch and Catalyst integration.
+* https://metacpan.org/module/Catalyst::Model::Search::Elasticsearch[Catalyst]:
+Elasticsearch and Catalyst integration.
* http://github.com/aparo/django-elasticsearch[django-elasticsearch]:
-Django ElasticSearch Backend.
+Django Elasticsearch Backend.
* http://github.com/Aconex/elasticflume[elasticflume]:
http://github.com/cloudera/flume[Flume] sink implementation.
@@ -33,7 +33,7 @@
Symfony2 Bundle wrapping Elastica.
* http://drupal.org/project/elasticsearch[Drupal]:
-Drupal ElasticSearch integration.
+Drupal Elasticsearch integration.
* https://github.com/refuge/couch_es[couch_es]:
elasticsearch helper for couchdb based products (apache couchdb, bigcouch & refuge)
@@ -51,11 +51,11 @@
Elasticsearch Java annotations for unit testing with
http://www.junit.org/[JUnit]
-* http://searchbox-io.github.com/wp-elasticsearch/[Wp-ElasticSearch]:
-ElasticSearch WordPress Plugin
+* http://searchbox-io.github.com/wp-elasticsearch/[Wp-Elasticsearch]:
+Elasticsearch WordPress Plugin
* https://github.com/OlegKunitsyn/eslogd[eslogd]:
-Linux daemon that replicates events to a central ElasticSearch server in real-time
+Linux daemon that replicates events to a central Elasticsearch server in real-time
* https://github.com/drewr/elasticsearch-clojure-repl[elasticsearch-clojure-repl]:
Plugin that embeds nREPL for run-time introspective adventure! Also
@@ -65,11 +65,11 @@
Modular search for Django
* https://github.com/cleverage/play2-elasticsearch[play2-elasticsearch]:
-ElasticSearch module for Play Framework 2.x
+Elasticsearch module for Play Framework 2.x
* https://github.com/fullscale/dangle[dangle]:
A set of AngularJS directives that provide common visualizations for elasticsearch based on
D3.
* https://github.com/roundscope/ember-data-elasticsearch-kit[ember-data-elasticsearch-kit]:
-An ember-data kit for both pushing and querying objects to ElasticSearch cluster
+An ember-data kit for both pushing and querying objects to Elasticsearch cluster

View File

@@ -11,7 +11,7 @@
RPMs for elasticsearch.
* http://www.github.com/neogenix/daikon[daikon]:
-Daikon ElasticSearch CLI
+Daikon Elasticsearch CLI
* https://github.com/Aconex/scrutineer[Scrutineer]:
A high performance consistency checker to compare what you've indexed

View File

@@ -7,10 +7,10 @@
* https://github.com/karmi/elasticsearch-paramedic[paramedic]:
Live charts with cluster stats and indices/shards information.
-* http://www.elastichq.org/[ElasticSearchHQ]:
+* http://www.elastichq.org/[ElasticsearchHQ]:
Free cluster health monitoring tool
-* http://sematext.com/spm/index.html[SPM for ElasticSearch]:
+* http://sematext.com/spm/index.html[SPM for Elasticsearch]:
Performance monitoring with live charts showing cluster and node stats, integrated
alerts, email reports, etc.
@@ -18,11 +18,11 @@
Nagios/Shinken plugins for checking on elasticsearch
* https://github.com/anchor/nagios-plugin-elasticsearch[check_elasticsearch]:
-An ElasticSearch availability and performance monitoring plugin for
+An Elasticsearch availability and performance monitoring plugin for
Nagios.
* https://github.com/rbramley/Opsview-elasticsearch[opsview-elasticsearch]:
-Opsview plugin written in Perl for monitoring ElasticSearch
+Opsview plugin written in Perl for monitoring Elasticsearch
* https://github.com/polyfractal/elasticsearch-segmentspy[SegmentSpy]:
Plugin to watch Lucene segment merges across your cluster

View File

@@ -2,7 +2,7 @@
== API Anatomy
Once a <<client,GClient>> has been
-obtained, all of ElasticSearch APIs can be executed on it. Each Groovy
+obtained, all of Elasticsearch APIs can be executed on it. Each Groovy
API is exposed using three different mechanisms.

View File

@@ -16,7 +16,7 @@ bulkRequest.add(client.prepareIndex("twitter", "tweet", "1")
.startObject()
.field("user", "kimchy")
.field("postDate", new Date())
-.field("message", "trying out Elastic Search")
+.field("message", "trying out Elasticsearch")
.endObject()
)
);

View File

@@ -37,7 +37,7 @@ dates regarding to the
String json = "{" +
"\"user\":\"kimchy\"," +
"\"postDate\":\"2013-01-30\"," +
-"\"message\":\"trying out Elastic Search\"," +
+"\"message\":\"trying out Elasticsearch\"," +
"}";
--------------------------------------------------
@@ -53,7 +53,7 @@ structure:
Map<String, Object> json = new HashMap<String, Object>();
json.put("user","kimchy");
json.put("postDate",new Date());
-json.put("message","trying out Elastic Search");
+json.put("message","trying out Elasticsearch");
--------------------------------------------------
@@ -104,7 +104,7 @@ XContentBuilder builder = jsonBuilder()
.startObject()
.field("user", "kimchy")
.field("postDate", new Date())
-.field("message", "trying out Elastic Search")
+.field("message", "trying out Elasticsearch")
.endObject()
--------------------------------------------------
@@ -137,7 +137,7 @@ IndexResponse response = client.prepareIndex("twitter", "tweet", "1")
.startObject()
.field("user", "kimchy")
.field("postDate", new Date())
-.field("message", "trying out Elastic Search")
+.field("message", "trying out Elasticsearch")
.endObject()
)
.execute()
@@ -152,7 +152,7 @@ don't have to give an ID:
String json = "{" +
"\"user\":\"kimchy\"," +
"\"postDate\":\"2013-01-30\"," +
-"\"message\":\"trying out Elastic Search\"," +
+"\"message\":\"trying out Elasticsearch\"," +
"}";
IndexResponse response = client.prepareIndex("twitter", "tweet")

View File

@@ -21,7 +21,7 @@ The result of the above get operation is:
"_source" : {
"user" : "kimchy",
"postDate" : "2009-11-15T14:12:12",
-"message" : "trying out Elastic Search"
+"message" : "trying out Elasticsearch"
}
}
--------------------------------------------------

View File

@@ -10,7 +10,7 @@ into the "twitter" index, under a type called "tweet" with an id of 1:
$ curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '{
"user" : "kimchy",
"post_date" : "2009-11-15T14:12:12",
-"message" : "trying out Elastic Search"
+"message" : "trying out Elasticsearch"
}'
--------------------------------------------------
@@ -54,7 +54,7 @@ $ curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '{
"tweet" : {
"user" : "kimchy",
"post_date" : "2009-11-15T14:12:12",
-"message" : "trying out Elastic Search"
+"message" : "trying out Elasticsearch"
}
}'
--------------------------------------------------
@@ -133,7 +133,7 @@ Here is an example of using the `op_type` parameter:
$ curl -XPUT 'http://localhost:9200/twitter/tweet/1?op_type=create' -d '{
"user" : "kimchy",
"post_date" : "2009-11-15T14:12:12",
-"message" : "trying out Elastic Search"
+"message" : "trying out Elasticsearch"
}'
--------------------------------------------------
@@ -144,7 +144,7 @@ Another option to specify `create` is to use the following uri:
$ curl -XPUT 'http://localhost:9200/twitter/tweet/1/_create' -d '{
"user" : "kimchy",
"post_date" : "2009-11-15T14:12:12",
-"message" : "trying out Elastic Search"
+"message" : "trying out Elasticsearch"
}'
--------------------------------------------------
@@ -161,7 +161,7 @@ will automatically be set to `create`. Here is an example (note the
$ curl -XPOST 'http://localhost:9200/twitter/tweet/' -d '{
"user" : "kimchy",
"post_date" : "2009-11-15T14:12:12",
-"message" : "trying out Elastic Search"
+"message" : "trying out Elasticsearch"
}'
--------------------------------------------------
@@ -192,7 +192,7 @@ on a per-operation basis using the `routing` parameter. For example:
$ curl -XPOST 'http://localhost:9200/twitter/tweet?routing=kimchy' -d '{
"user" : "kimchy",
"post_date" : "2009-11-15T14:12:12",
-"message" : "trying out Elastic Search"
+"message" : "trying out Elasticsearch"
}'
--------------------------------------------------
@@ -236,7 +236,7 @@ parameter. For example:
--------------------------------------------------
$ curl -XPUT localhost:9200/twitter/tweet/1?timestamp=2009-11-15T14%3A12%3A12 -d '{
"user" : "kimchy",
-"message" : "trying out Elastic Search",
+"message" : "trying out Elasticsearch",
}'
--------------------------------------------------
@@ -348,6 +348,6 @@ to 5 minutes:
$ curl -XPUT 'http://localhost:9200/twitter/tweet/1?timeout=5m' -d '{
"user" : "kimchy",
"post_date" : "2009-11-15T14:12:12",
-"message" : "trying out Elastic Search"
+"message" : "trying out Elasticsearch"
}'
--------------------------------------------------

View File

@@ -17,7 +17,7 @@ the `object` type and namely the root `object` type allow for schema
less dynamic addition of unmapped fields.
The default mapping definition is plain mapping definition that is
-embedded within ElasticSearch:
+embedded within Elasticsearch:
[source,js]
--------------------------------------------------

View File

@@ -13,7 +13,7 @@ required since the local gateway constructs its state from the local
index state of each node.
Another important aspect of memory based storage is the fact that
-ElasticSearch supports storing the index in memory *outside of the JVM
+Elasticsearch supports storing the index in memory *outside of the JVM
heap space* using the "Memory" (see below) storage type. It translates
to the fact that there is no need for extra large JVM heaps (with their
own consequences) for storing the index in memory.

View File

@@ -1,7 +1,7 @@
[[indices-create-index]]
== Create Index
-The create index API allows to instantiate an index. ElasticSearch
+The create index API allows to instantiate an index. Elasticsearch
provides support for multiple indices, including executing operations
across several indices. Each index created can have specific settings
associated with it.

View File

@@ -5,7 +5,7 @@ The flush API allows to flush one or more indices through an API. The
flush process of an index basically frees memory from the index by
flushing data to the index storage and clearing the internal
<<index-modules-translog,transaction log>>. By
-default, ElasticSearch uses memory heuristics in order to automatically
+default, Elasticsearch uses memory heuristics in order to automatically
trigger flush operations as required in order to clear memory.
[source,js]

View File

@@ -6,8 +6,8 @@
Mapping is the process of defining how a document should be mapped to
the Search Engine, including its searchable characteristics such as
which fields are searchable and if/how they are tokenized. In
-ElasticSearch, an index may store documents of different "mapping
-types". ElasticSearch allows one to associate multiple mapping
+Elasticsearch, an index may store documents of different "mapping
+types". Elasticsearch allows one to associate multiple mapping
definitions for each mapping type.
Explicit mapping is defined on an index/type level. By default, there

View File

@@ -78,7 +78,7 @@ http://en.wikipedia.org/wiki/Quadtree[quadtree] for grid squares.
Similar to geohash, quad trees interleave the bits of the latitude and
longitude the resulting hash is a bit set. A tree level in a quad tree
represents 2 bits in this bit set, one for each coordinate. The maximum
-amount of levels for the quad trees in elastic search is 50.
+amount of levels for the quad trees in Elasticsearch is 50.
[float]
===== Accuracy
@@ -124,7 +124,7 @@ Big, complex polygons can take up a lot of space at higher tree levels.
Which setting is right depends on the use case. Generally one trades off
accuracy against index size and query performance.
-The defaults in elastic search for both implementations are a compromise
+The defaults in Elasticsearch for both implementations are a compromise
between index size and a reasonable level of precision of 50m at the
equator. This allows for indexing tens of millions of shapes without
overly bloating the resulting index too much relative to the input size.

View File

@@ -2,7 +2,7 @@
=== Object Type
JSON documents are hierarchical in nature, allowing them to define inner
-"objects" within the actual JSON. ElasticSearch completely understands
+"objects" within the actual JSON. Elasticsearch completely understands
the nature of these inner objects and can map them easily, providing
query support for their inner fields. Because each document can have
objects with different fields each time, objects mapped this way are
@@ -72,7 +72,7 @@ An object mapping can optionally define one or more properties using the
[float]
==== dynamic
-One of the most important features of ElasticSearch is its ability to be
+One of the most important features of Elasticsearch is its ability to be
schema-less. This means that, in our example above, the `person` object
can be indexed later with a new property -- `age`, for example -- and it
will automatically be added to the mapping definitions. Same goes for

View File

@@ -4,7 +4,7 @@
The discovery module is responsible for discovering nodes within a
cluster, as well as electing a master node.
-Note, ElasticSearch is a peer to peer based system, nodes communicate
+Note, Elasticsearch is a peer to peer based system, nodes communicate
with one another directly if operations are delegated / broadcast. All
the main APIs (index, delete, search) do not communicate with the master
node. The responsibility of the master node is to maintain the global

View File

@@ -23,7 +23,7 @@ gateway:
The location where the gateway stores the cluster state can be set using
the `gateway.fs.location` setting. By default, it will be stored under
the `work` directory. Note, the `work` directory is considered a
-temporal directory with ElasticSearch (meaning it is safe to `rm -rf`
+temporal directory with Elasticsearch (meaning it is safe to `rm -rf`
it), the default location of the persistent gateway in work intentional,
*it should be changed*.

View File

@@ -230,7 +230,7 @@ bin/plugin --install mobz/elasticsearch-head --timeout 0
.Supported by the community
* https://github.com/lukas-vlcek/bigdesk[BigDesk Plugin] (by Lukáš Vlček)
* https://github.com/mobz/elasticsearch-head[Elasticsearch Head Plugin] (by Ben Birch)
-* https://github.com/royrusso/elasticsearch-HQ[ElasticSearch HQ] (by Roy Russo)
+* https://github.com/royrusso/elasticsearch-HQ[Elasticsearch HQ] (by Roy Russo)
* https://github.com/andrewvc/elastic-hammer[Hammer Plugin] (by Andrew Cholakian)
* https://github.com/polyfractal/elasticsearch-inquisitor[Inquisitor Plugin] (by Zachary Tong)
* https://github.com/karmi/elasticsearch-paramedic[Paramedic Plugin] (by Karel Minařík)
@@ -247,14 +247,14 @@ bin/plugin --install mobz/elasticsearch-head --timeout 0
.Supported by the community
* https://github.com/carrot2/elasticsearch-carrot2[carrot2 Plugin]: Results clustering with carrot2 (by Dawid Weiss)
-* https://github.com/derryx/elasticsearch-changes-plugin[ElasticSearch Changes Plugin] (by Thomas Peuss)
+* https://github.com/derryx/elasticsearch-changes-plugin[Elasticsearch Changes Plugin] (by Thomas Peuss)
* https://github.com/johtani/elasticsearch-extended-analyze[Extended Analyze Plugin] (by Jun Ohtani)
-* https://github.com/spinscale/elasticsearch-graphite-plugin[ElasticSearch Graphite Plugin] (by Alexander Reelsen)
-* https://github.com/mattweber/elasticsearch-mocksolrplugin[ElasticSearch Mock Solr Plugin] (by Matt Weber)
-* https://github.com/viniciusccarvalho/elasticsearch-newrelic[ElasticSearch New Relic Plugin] (by Vinicius Carvalho)
-* https://github.com/swoop-inc/elasticsearch-statsd-plugin[ElasticSearch Statsd Plugin] (by Swoop Inc.)
+* https://github.com/spinscale/elasticsearch-graphite-plugin[Elasticsearch Graphite Plugin] (by Alexander Reelsen)
+* https://github.com/mattweber/elasticsearch-mocksolrplugin[Elasticsearch Mock Solr Plugin] (by Matt Weber)
+* https://github.com/viniciusccarvalho/elasticsearch-newrelic[Elasticsearch New Relic Plugin] (by Vinicius Carvalho)
+* https://github.com/swoop-inc/elasticsearch-statsd-plugin[Elasticsearch Statsd Plugin] (by Swoop Inc.)
* https://github.com/endgameinc/elasticsearch-term-plugin[Terms Component Plugin] (by Endgame Inc.)
-* http://tlrx.github.com/elasticsearch-view-plugin[ElasticSearch View Plugin] (by Tanguy Leroux)
+* http://tlrx.github.com/elasticsearch-view-plugin[Elasticsearch View Plugin] (by Tanguy Leroux)
* https://github.com/sonian/elasticsearch-zookeeper[ZooKeeper Discovery Plugin] (by Sonian Inc.)

View File

@@ -12,7 +12,7 @@ that there is no blocking thread waiting for a response. The benefit of
using asynchronous communication is first solving the
http://en.wikipedia.org/wiki/C10k_problem[C10k problem], as well as
being the idle solution for scatter (broadcast) / gather operations such
-as search in ElasticSearch.
+as search in Elasticsearch.
[float]
=== TCP Transport

View File

@@ -21,7 +21,7 @@ when indexing tweets, the routing value can be the user name:
$ curl -XPOST 'http://localhost:9200/twitter/tweet?routing=kimchy' -d '{
"user" : "kimchy",
"postDate" : "2009-11-15T14:12:12",
-"message" : "trying out Elastic Search"
+"message" : "trying out Elasticsearch"
}
'
--------------------------------------------------

View File

@@ -8,7 +8,7 @@ _Facets_ provide aggregated data based on a search query. In the
simplest case, a
<<search-facets-terms-facet,terms facet>>
can return _facet counts_ for various _facet values_ for a specific
-_field_. ElasticSearch supports more facet implementations, such as
+_field_. Elasticsearch supports more facet implementations, such as
<<search-facets-statistical-facet,statistical>>
or
<<search-facets-date-histogram-facet,date

View File

@@ -35,7 +35,7 @@ And here is a sample response:
"_source" : {
"user" : "kimchy",
"postDate" : "2009-11-15T14:12:12",
-"message" : "trying out Elastic Search"
+"message" : "trying out Elasticsearch"
}
}
]

View File

@@ -29,7 +29,7 @@ behavior can be a very expensive operation. For large result set
scrolling without sorting, the `scan` search type (explained below) is
also available.
-ElasticSearch is very flexible and allows to control the type of search
+Elasticsearch is very flexible and allows to control the type of search
to execute on a *per search request* basis. The type can be configured
by setting the *search_type* parameter in the query string. The types
are:

View File

@ -31,7 +31,7 @@ And here is a sample response:
"_source" : {
"user" : "kimchy",
"postDate" : "2009-11-15T14:12:12",
"message" : "trying out Elastic Search"
"message" : "trying out Elasticsearch"
}
}
]

View File

@ -9,7 +9,7 @@ without executing it. The following example shows how it can be used:
curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '{
"user" : "kimchy",
"post_date" : "2009-11-15T14:12:12",
"message" : "trying out Elastic Search"
"message" : "trying out Elasticsearch"
}'
--------------------------------------------------
@ -41,7 +41,7 @@ curl -XGET 'http://localhost:9200/twitter/tweet/_validate/query' -d '{
--------------------------------------------------
If the query is invalid, `valid` will be `false`. Here the query is
invalid because ElasticSearch knows the post_date field should be a date
invalid because Elasticsearch knows the post_date field should be a date
due to dynamic mapping, and 'foo' does not correctly parse into a date:
[source,js]
@ -66,7 +66,7 @@ curl -XGET 'http://localhost:9200/twitter/tweet/_validate/query?q=post_date:foo&
"explanations" : [ {
"index" : "twitter",
"valid" : false,
"error" : "org.elasticsearch.index.query.QueryParsingException: [twitter] Failed to parse; org.elasticsearch.ElasticSearchParseException: failed to parse date field [foo], tried both date format [dateOptionalTime], and timestamp number; java.lang.IllegalArgumentException: Invalid format: \"foo\""
"error" : "org.elasticsearch.index.query.QueryParsingException: [twitter] Failed to parse; org.elasticsearch.ElasticsearchParseException: failed to parse date field [foo], tried both date format [dateOptionalTime], and timestamp number; java.lang.IllegalArgumentException: Invalid format: \"foo\""
} ]
}
--------------------------------------------------
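The error above comes from dynamic mapping: `post_date` was first indexed as a date, so a non-date value such as `"foo"` fails at parse time. A minimal JDK illustration of the same failure mode — Elasticsearch's own date parsing uses a different library, so this is only a sketch:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;

public class DateParseSketch {
    public static void main(String[] args) {
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss");
        // A well-formed timestamp parses fine.
        System.out.println(LocalDateTime.parse("2009-11-15T14:12:12", fmt));
        try {
            LocalDateTime.parse("foo", fmt); // a non-date value cannot parse
        } catch (DateTimeParseException e) {
            // This is the moral equivalent of: Invalid format: "foo"
            System.out.println("Invalid format: \"foo\"");
        }
    }
}
```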

View File

@ -27,7 +27,7 @@ To run it in the background, add the `-d` switch to it:
$ bin/elasticsearch -d
--------------------------------------------------
ElasticSearch is built using Java, and requires at least
Elasticsearch is built using Java, and requires at least
http://java.sun.com/javase/downloads/index.jsp[Java 6] in order to run.
The version of Java that will be used can be set by setting the
`JAVA_HOME` environment variable.

View File

@ -4,7 +4,7 @@
[float]
=== Environment Variables
Within the scripts, ElasticSearch comes with built in `JAVA_OPTS` passed
Within the scripts, Elasticsearch comes with built in `JAVA_OPTS` passed
to the JVM started. The most important setting for that is the `-Xmx` to
control the maximum allowed memory for the process, and `-Xms` to
control the minimum allocated memory for the process (_in general, the
@ -74,9 +74,9 @@ memory is available on the machine).
*elasticsearch* configuration files can be found under `ES_HOME/config`
folder. The folder comes with two files, the `elasticsearch.yml` for
configuring ElasticSearch's different
configuring Elasticsearch's different
<<modules,modules>>, and `logging.yml` for
configuring the ElasticSearch logging.
configuring the Elasticsearch logging.
The configuration format is http://www.yaml.org/[YAML]. Here is an
example of changing the address all network based modules will use to
@ -121,7 +121,7 @@ cluster:
==== Node name
You may also want to change the default node name for each node to
something like the display hostname. By default ElasticSearch will
something like the display hostname. By default Elasticsearch will
randomly pick a Marvel character name from a list of around 3000 names
when your node starts up.
@ -231,7 +231,7 @@ All of the index level configuration can be found within each
[[logging]]
=== Logging
ElasticSearch uses an internal logging abstraction and comes, out of the
Elasticsearch uses an internal logging abstraction and comes, out of the
box, with http://logging.apache.org/log4j/[log4j]. It tries to simplify
log4j configuration by using http://www.yaml.org/[YAML] to configure it,
and the logging configuration file is `config/logging.yml` file.

View File

@ -8,7 +8,7 @@
<artifactId>elasticsearch</artifactId>
<version>1.0.0.RC1-SNAPSHOT</version>
<packaging>jar</packaging>
<description>ElasticSearch - Open Source, Distributed, RESTful Search Engine</description>
<description>Elasticsearch - Open Source, Distributed, RESTful Search Engine</description>
<inceptionYear>2009</inceptionYear>
<licenses>
<license>

View File

@ -7,7 +7,7 @@ Section: web
Priority: optional
Homepage: http://www.elasticsearch.org/
Description: Open Source, Distributed, RESTful Search Engine
ElasticSearch is a distributed RESTful search engine built for the cloud.
Elasticsearch is a distributed RESTful search engine built for the cloud.
.
Features include:
.

View File

@ -1,4 +1,4 @@
# Run ElasticSearch as this user ID and group ID
# Run Elasticsearch as this user ID and group ID
#ES_USER=elasticsearch
#ES_GROUP=elasticsearch
@ -22,19 +22,19 @@
# Maximum number of VMA (Virtual Memory Areas) a process can own
#MAX_MAP_COUNT=262144
# ElasticSearch log directory
# Elasticsearch log directory
#LOG_DIR=/var/log/elasticsearch
# ElasticSearch data directory
# Elasticsearch data directory
#DATA_DIR=/var/lib/elasticsearch
# ElasticSearch work directory
# Elasticsearch work directory
#WORK_DIR=/tmp/elasticsearch
# ElasticSearch configuration directory
# Elasticsearch configuration directory
#CONF_DIR=/etc/elasticsearch
# ElasticSearch configuration file (elasticsearch.yml)
# Elasticsearch configuration file (elasticsearch.yml)
#CONF_FILE=/etc/elasticsearch/elasticsearch.yml
# Additional Java OPTS

View File

@ -7,7 +7,7 @@
# Modified for Tomcat by Stefan Gybas <sgybas@debian.org>.
# Modified for Tomcat6 by Thierry Carrez <thierry.carrez@ubuntu.com>.
# Additional improvements by Jason Brittain <jason.brittain@mulesoft.com>.
# Modified by Nicolas Huray for ElasticSearch <nicolas.huray@gmail.com>.
# Modified by Nicolas Huray for Elasticsearch <nicolas.huray@gmail.com>.
#
### BEGIN INIT INFO
# Provides: elasticsearch
@ -21,7 +21,7 @@
PATH=/bin:/usr/bin:/sbin:/usr/sbin
NAME=elasticsearch
DESC="ElasticSearch Server"
DESC="Elasticsearch Server"
DEFAULT=/etc/default/$NAME
if [ `id -u` -ne 0 ]; then
@ -39,7 +39,7 @@ fi
# The following variables can be overwritten in $DEFAULT
# Run ElasticSearch as this user ID and group ID
# Run Elasticsearch as this user ID and group ID
ES_USER=elasticsearch
ES_GROUP=elasticsearch
@ -54,7 +54,7 @@ for jdir in $JDK_DIRS; do
done
export JAVA_HOME
# Directory where the ElasticSearch binary distribution resides
# Directory where the Elasticsearch binary distribution resides
ES_HOME=/usr/share/$NAME
# Heap Size (defaults to 256m min, 1g max)
@ -75,19 +75,19 @@ MAX_OPEN_FILES=65535
# Maximum amount of locked memory
#MAX_LOCKED_MEMORY=
# ElasticSearch log directory
# Elasticsearch log directory
LOG_DIR=/var/log/$NAME
# ElasticSearch data directory
# Elasticsearch data directory
DATA_DIR=/var/lib/$NAME
# ElasticSearch work directory
# Elasticsearch work directory
WORK_DIR=/tmp/$NAME
# ElasticSearch configuration directory
# Elasticsearch configuration directory
CONF_DIR=/etc/$NAME
# ElasticSearch configuration file (elasticsearch.yml)
# Elasticsearch configuration file (elasticsearch.yml)
CONF_FILE=$CONF_DIR/elasticsearch.yml
# Maximum number of VMA (Virtual Memory Areas) a process can own

View File

@ -19,7 +19,7 @@
package org.apache.lucene.store;
import org.apache.lucene.store.RateLimiter.SimpleRateLimiter;
import org.elasticsearch.ElasticSearchIllegalArgumentException;
import org.elasticsearch.ElasticsearchIllegalArgumentException;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.unit.ByteSizeValue;
@ -42,7 +42,7 @@ public class StoreRateLimiting {
MERGE,
ALL;
public static Type fromString(String type) throws ElasticSearchIllegalArgumentException {
public static Type fromString(String type) throws ElasticsearchIllegalArgumentException {
if ("none".equalsIgnoreCase(type)) {
return NONE;
} else if ("merge".equalsIgnoreCase(type)) {
@ -50,7 +50,7 @@ public class StoreRateLimiting {
} else if ("all".equalsIgnoreCase(type)) {
return ALL;
}
throw new ElasticSearchIllegalArgumentException("rate limiting type [" + type + "] not valid, can be one of [all|merge|none]");
throw new ElasticsearchIllegalArgumentException("rate limiting type [" + type + "] not valid, can be one of [all|merge|none]");
}
}
@ -88,7 +88,7 @@ public class StoreRateLimiting {
this.type = type;
}
public void setType(String type) throws ElasticSearchIllegalArgumentException {
public void setType(String type) throws ElasticsearchIllegalArgumentException {
this.type = Type.fromString(type);
}
}

View File

@ -24,25 +24,25 @@ import org.elasticsearch.rest.RestStatus;
/**
* A base class for all elasticsearch exceptions.
*/
public class ElasticSearchException extends RuntimeException {
public class ElasticsearchException extends RuntimeException {
/**
* Construct a <code>ElasticSearchException</code> with the specified detail message.
* Construct a <code>ElasticsearchException</code> with the specified detail message.
*
* @param msg the detail message
*/
public ElasticSearchException(String msg) {
public ElasticsearchException(String msg) {
super(msg);
}
/**
* Construct a <code>ElasticSearchException</code> with the specified detail message
* Construct a <code>ElasticsearchException</code> with the specified detail message
* and nested exception.
*
* @param msg the detail message
* @param cause the nested exception
*/
public ElasticSearchException(String msg, Throwable cause) {
public ElasticsearchException(String msg, Throwable cause) {
super(msg, cause);
}
@ -53,8 +53,8 @@ public class ElasticSearchException extends RuntimeException {
Throwable cause = unwrapCause();
if (cause == this) {
return RestStatus.INTERNAL_SERVER_ERROR;
} else if (cause instanceof ElasticSearchException) {
return ((ElasticSearchException) cause).status();
} else if (cause instanceof ElasticsearchException) {
return ((ElasticsearchException) cause).status();
} else if (cause instanceof IllegalArgumentException) {
return RestStatus.BAD_REQUEST;
} else {
@ -64,7 +64,7 @@ public class ElasticSearchException extends RuntimeException {
/**
* Unwraps the actual cause from the exception for cases when the exception is a
* {@link ElasticSearchWrapperException}.
* {@link ElasticsearchWrapperException}.
*
* @see org.elasticsearch.ExceptionsHelper#unwrapCause(Throwable)
*/
@ -80,8 +80,8 @@ public class ElasticSearchException extends RuntimeException {
if (getCause() != null) {
StringBuilder sb = new StringBuilder();
sb.append(toString()).append("; ");
if (getCause() instanceof ElasticSearchException) {
sb.append(((ElasticSearchException) getCause()).getDetailedMessage());
if (getCause() instanceof ElasticsearchException) {
sb.append(((ElasticsearchException) getCause()).getDetailedMessage());
} else {
sb.append(getCause());
}
@ -137,8 +137,8 @@ public class ElasticSearchException extends RuntimeException {
if (cause == this) {
return false;
}
if (cause instanceof ElasticSearchException) {
return ((ElasticSearchException) cause).contains(exType);
if (cause instanceof ElasticsearchException) {
return ((ElasticsearchException) cause).contains(exType);
} else {
while (cause != null) {
if (exType.isInstance(cause)) {

View File

@ -24,13 +24,13 @@ package org.elasticsearch;
*
*
*/
public class ElasticSearchGenerationException extends ElasticSearchException {
public class ElasticsearchGenerationException extends ElasticsearchException {
public ElasticSearchGenerationException(String msg) {
public ElasticsearchGenerationException(String msg) {
super(msg);
}
public ElasticSearchGenerationException(String msg, Throwable cause) {
public ElasticsearchGenerationException(String msg, Throwable cause) {
super(msg, cause);
}
}

View File

@ -24,17 +24,17 @@ import org.elasticsearch.rest.RestStatus;
/**
*
*/
public class ElasticSearchIllegalArgumentException extends ElasticSearchException {
public class ElasticsearchIllegalArgumentException extends ElasticsearchException {
public ElasticSearchIllegalArgumentException() {
public ElasticsearchIllegalArgumentException() {
super(null);
}
public ElasticSearchIllegalArgumentException(String msg) {
public ElasticsearchIllegalArgumentException(String msg) {
super(msg);
}
public ElasticSearchIllegalArgumentException(String msg, Throwable cause) {
public ElasticsearchIllegalArgumentException(String msg, Throwable cause) {
super(msg, cause);
}

View File

@ -22,17 +22,17 @@ package org.elasticsearch;
/**
*
*/
public class ElasticSearchIllegalStateException extends ElasticSearchException {
public class ElasticsearchIllegalStateException extends ElasticsearchException {
public ElasticSearchIllegalStateException() {
public ElasticsearchIllegalStateException() {
super(null);
}
public ElasticSearchIllegalStateException(String msg) {
public ElasticsearchIllegalStateException(String msg) {
super(msg);
}
public ElasticSearchIllegalStateException(String msg, Throwable cause) {
public ElasticsearchIllegalStateException(String msg, Throwable cause) {
super(msg, cause);
}
}

View File

@ -24,13 +24,13 @@ package org.elasticsearch;
*
*
*/
public class ElasticSearchInterruptedException extends ElasticSearchException {
public class ElasticsearchInterruptedException extends ElasticsearchException {
public ElasticSearchInterruptedException(String message) {
public ElasticsearchInterruptedException(String message) {
super(message);
}
public ElasticSearchInterruptedException(String message, Throwable cause) {
public ElasticsearchInterruptedException(String message, Throwable cause) {
super(message, cause);
}
}

View File

@ -22,17 +22,17 @@ package org.elasticsearch;
/**
*
*/
public class ElasticSearchNullPointerException extends ElasticSearchException {
public class ElasticsearchNullPointerException extends ElasticsearchException {
public ElasticSearchNullPointerException() {
public ElasticsearchNullPointerException() {
super(null);
}
public ElasticSearchNullPointerException(String msg) {
public ElasticsearchNullPointerException(String msg) {
super(msg);
}
public ElasticSearchNullPointerException(String msg, Throwable cause) {
public ElasticsearchNullPointerException(String msg, Throwable cause) {
super(msg, cause);
}
}

View File

@ -24,13 +24,13 @@ import org.elasticsearch.rest.RestStatus;
/**
*
*/
public class ElasticSearchParseException extends ElasticSearchException {
public class ElasticsearchParseException extends ElasticsearchException {
public ElasticSearchParseException(String msg) {
public ElasticsearchParseException(String msg) {
super(msg);
}
public ElasticSearchParseException(String msg, Throwable cause) {
public ElasticsearchParseException(String msg, Throwable cause) {
super(msg, cause);
}

View File

@ -24,13 +24,13 @@ package org.elasticsearch;
*
*
*/
public class ElasticSearchTimeoutException extends ElasticSearchException {
public class ElasticsearchTimeoutException extends ElasticsearchException {
public ElasticSearchTimeoutException(String message) {
public ElasticsearchTimeoutException(String message) {
super(message);
}
public ElasticSearchTimeoutException(String message, Throwable cause) {
public ElasticsearchTimeoutException(String message, Throwable cause) {
super(message, cause);
}
}

View File

@ -22,7 +22,7 @@ package org.elasticsearch;
/**
*
*/
public interface ElasticSearchWrapperException {
public interface ElasticsearchWrapperException {
Throwable getCause();
}

View File

@ -34,19 +34,19 @@ public final class ExceptionsHelper {
if (t instanceof RuntimeException) {
return (RuntimeException) t;
}
return new ElasticSearchException(t.getMessage(), t);
return new ElasticsearchException(t.getMessage(), t);
}
public static ElasticSearchException convertToElastic(Throwable t) {
if (t instanceof ElasticSearchException) {
return (ElasticSearchException) t;
public static ElasticsearchException convertToElastic(Throwable t) {
if (t instanceof ElasticsearchException) {
return (ElasticsearchException) t;
}
return new ElasticSearchException(t.getMessage(), t);
return new ElasticsearchException(t.getMessage(), t);
}
public static RestStatus status(Throwable t) {
if (t instanceof ElasticSearchException) {
return ((ElasticSearchException) t).status();
if (t instanceof ElasticsearchException) {
return ((ElasticsearchException) t).status();
}
return RestStatus.INTERNAL_SERVER_ERROR;
}
@ -54,7 +54,7 @@ public final class ExceptionsHelper {
public static Throwable unwrapCause(Throwable t) {
int counter = 0;
Throwable result = t;
while (result instanceof ElasticSearchWrapperException) {
while (result instanceof ElasticsearchWrapperException) {
if (result.getCause() == null) {
return result;
}
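The unwrap loop above walks through wrapper exceptions until it reaches the real cause, bailing out if a wrapper has no cause. A self-contained sketch of the same pattern, using stand-in classes rather than the real `ElasticsearchWrapperException`:

```java
public class UnwrapSketch {
    interface WrapperException { Throwable getCause(); }

    static class Wrapped extends RuntimeException implements WrapperException {
        Wrapped(Throwable cause) { super(cause); }
    }

    // Same shape as ExceptionsHelper.unwrapCause: follow wrapper causes until
    // a non-wrapper is reached; return the wrapper itself if it has no cause.
    static Throwable unwrapCause(Throwable t) {
        Throwable result = t;
        while (result instanceof WrapperException) {
            if (result.getCause() == null) {
                return result;
            }
            result = result.getCause();
        }
        return result;
    }

    public static void main(String[] args) {
        Throwable root = new IllegalArgumentException("bad value");
        System.out.println(unwrapCause(new Wrapped(new Wrapped(root))) == root); // prints true
    }
}
```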

View File

@ -19,7 +19,7 @@
package org.elasticsearch.action;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.unit.TimeValue;
@ -35,29 +35,29 @@ public interface ActionFuture<T> extends Future<T> {
/**
* Similar to {@link #get()}, just wrapping the {@link InterruptedException} with
* {@link org.elasticsearch.ElasticSearchInterruptedException}, and throwing the actual
* {@link org.elasticsearch.ElasticsearchInterruptedException}, and throwing the actual
* cause of the {@link java.util.concurrent.ExecutionException}.
* <p/>
* <p>Note, the actual cause is unwrapped to the actual failure (for example, unwrapped
* from {@link org.elasticsearch.transport.RemoteTransportException}. The root failure is
* still accessible using {@link #getRootFailure()}.
*/
T actionGet() throws ElasticSearchException;
T actionGet() throws ElasticsearchException;
/**
* Similar to {@link #get(long, java.util.concurrent.TimeUnit)}, just wrapping the {@link InterruptedException} with
* {@link org.elasticsearch.ElasticSearchInterruptedException}, and throwing the actual
* {@link org.elasticsearch.ElasticsearchInterruptedException}, and throwing the actual
* cause of the {@link java.util.concurrent.ExecutionException}.
* <p/>
* <p>Note, the actual cause is unwrapped to the actual failure (for example, unwrapped
* from {@link org.elasticsearch.transport.RemoteTransportException}. The root failure is
* still accessible using {@link #getRootFailure()}.
*/
T actionGet(String timeout) throws ElasticSearchException;
T actionGet(String timeout) throws ElasticsearchException;
/**
* Similar to {@link #get(long, java.util.concurrent.TimeUnit)}, just wrapping the {@link InterruptedException} with
* {@link org.elasticsearch.ElasticSearchInterruptedException}, and throwing the actual
* {@link org.elasticsearch.ElasticsearchInterruptedException}, and throwing the actual
* cause of the {@link java.util.concurrent.ExecutionException}.
* <p/>
* <p>Note, the actual cause is unwrapped to the actual failure (for example, unwrapped
@ -66,29 +66,29 @@ public interface ActionFuture<T> extends Future<T> {
*
* @param timeoutMillis Timeout in millis
*/
T actionGet(long timeoutMillis) throws ElasticSearchException;
T actionGet(long timeoutMillis) throws ElasticsearchException;
/**
* Similar to {@link #get(long, java.util.concurrent.TimeUnit)}, just wrapping the {@link InterruptedException} with
* {@link org.elasticsearch.ElasticSearchInterruptedException}, and throwing the actual
* {@link org.elasticsearch.ElasticsearchInterruptedException}, and throwing the actual
* cause of the {@link java.util.concurrent.ExecutionException}.
* <p/>
* <p>Note, the actual cause is unwrapped to the actual failure (for example, unwrapped
* from {@link org.elasticsearch.transport.RemoteTransportException}. The root failure is
* still accessible using {@link #getRootFailure()}.
*/
T actionGet(long timeout, TimeUnit unit) throws ElasticSearchException;
T actionGet(long timeout, TimeUnit unit) throws ElasticsearchException;
/**
* Similar to {@link #get(long, java.util.concurrent.TimeUnit)}, just wrapping the {@link InterruptedException} with
* {@link org.elasticsearch.ElasticSearchInterruptedException}, and throwing the actual
* {@link org.elasticsearch.ElasticsearchInterruptedException}, and throwing the actual
* cause of the {@link java.util.concurrent.ExecutionException}.
* <p/>
* <p>Note, the actual cause is unwrapped to the actual failure (for example, unwrapped
* from {@link org.elasticsearch.transport.RemoteTransportException}. The root failure is
* still accessible using {@link #getRootFailure()}.
*/
T actionGet(TimeValue timeout) throws ElasticSearchException;
T actionGet(TimeValue timeout) throws ElasticsearchException;
/**
* The root (possibly) wrapped failure.

View File

@ -19,7 +19,7 @@
package org.elasticsearch.action;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.support.PlainListenableActionFuture;
import org.elasticsearch.client.internal.InternalGenericClient;
import org.elasticsearch.common.unit.TimeValue;
@ -63,21 +63,21 @@ public abstract class ActionRequestBuilder<Request extends ActionRequest, Respon
/**
* Short version of execute().actionGet().
*/
public Response get() throws ElasticSearchException {
public Response get() throws ElasticsearchException {
return execute().actionGet();
}
/**
* Short version of execute().actionGet().
*/
public Response get(TimeValue timeout) throws ElasticSearchException {
public Response get(TimeValue timeout) throws ElasticsearchException {
return execute().actionGet(timeout);
}
/**
* Short version of execute().actionGet().
*/
public Response get(String timeout) throws ElasticSearchException {
public Response get(String timeout) throws ElasticsearchException {
return execute().actionGet(timeout);
}

View File

@ -19,7 +19,7 @@
package org.elasticsearch.action;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
import java.util.ArrayList;
import java.util.List;
@ -27,7 +27,7 @@ import java.util.List;
/**
*
*/
public class ActionRequestValidationException extends ElasticSearchException {
public class ActionRequestValidationException extends ElasticsearchException {
private final List<String> validationErrors = new ArrayList<String>();

View File

@ -19,12 +19,12 @@
package org.elasticsearch.action;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
/**
*
*/
public class FailedNodeException extends ElasticSearchException {
public class FailedNodeException extends ElasticsearchException {
private final String nodeId;

View File

@ -19,12 +19,12 @@
package org.elasticsearch.action;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
/**
*
*/
public class PrimaryMissingActionException extends ElasticSearchException {
public class PrimaryMissingActionException extends ElasticsearchException {
public PrimaryMissingActionException(String message) {
super(message);

View File

@ -19,13 +19,13 @@
package org.elasticsearch.action;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.rest.RestStatus;
/**
*
*/
public class RoutingMissingException extends ElasticSearchException {
public class RoutingMissingException extends ElasticsearchException {
private final String index;

View File

@ -19,7 +19,7 @@
package org.elasticsearch.action;
import org.elasticsearch.ElasticSearchIllegalArgumentException;
import org.elasticsearch.ElasticsearchIllegalArgumentException;
/**
*
@ -108,7 +108,7 @@ public enum ThreadingModel {
} else if (id == 3) {
return OPERATION_LISTENER;
} else {
throw new ElasticSearchIllegalArgumentException("No threading model for [" + id + "]");
throw new ElasticsearchIllegalArgumentException("No threading model for [" + id + "]");
}
}
}

View File

@ -19,11 +19,11 @@
package org.elasticsearch.action;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
/**
*/
public class TimestampParsingException extends ElasticSearchException {
public class TimestampParsingException extends ElasticsearchException {
private final String timestamp;

View File

@ -19,7 +19,7 @@
package org.elasticsearch.action;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.support.PlainActionFuture;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.common.component.AbstractComponent;
@ -52,7 +52,7 @@ public class TransportActionNodeProxy<Request extends ActionRequest, Response ex
this.transportOptions = action.transportOptions(settings);
}
public ActionFuture<Response> execute(DiscoveryNode node, Request request) throws ElasticSearchException {
public ActionFuture<Response> execute(DiscoveryNode node, Request request) throws ElasticsearchException {
PlainActionFuture<Response> future = newFuture();
request.listenerThreaded(false);
execute(node, request, future);

View File

@ -19,7 +19,7 @@
package org.elasticsearch.action;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.rest.RestStatus;
@ -27,7 +27,7 @@ import org.elasticsearch.rest.RestStatus;
/**
*
*/
public class UnavailableShardsException extends ElasticSearchException {
public class UnavailableShardsException extends ElasticsearchException {
public UnavailableShardsException(@Nullable ShardId shardId, String message) {
super(buildMessage(shardId, message));

View File

@ -19,7 +19,7 @@
package org.elasticsearch.action;
import org.elasticsearch.ElasticSearchIllegalArgumentException;
import org.elasticsearch.ElasticsearchIllegalArgumentException;
/**
* Write Consistency Level control how many replicas should be active for a write operation to occur (a write operation
@ -53,7 +53,7 @@ public enum WriteConsistencyLevel {
} else if (value == 3) {
return ALL;
}
throw new ElasticSearchIllegalArgumentException("No write consistency match [" + value + "]");
throw new ElasticsearchIllegalArgumentException("No write consistency match [" + value + "]");
}
public static WriteConsistencyLevel fromString(String value) {
@ -66,6 +66,6 @@ public enum WriteConsistencyLevel {
} else if (value.equals("all")) {
return ALL;
}
throw new ElasticSearchIllegalArgumentException("No write consistency match [" + value + "]");
throw new ElasticsearchIllegalArgumentException("No write consistency match [" + value + "]");
}
}

View File

@ -19,7 +19,7 @@
package org.elasticsearch.action.admin.cluster.health;
import org.elasticsearch.ElasticSearchIllegalArgumentException;
import org.elasticsearch.ElasticsearchIllegalArgumentException;
/**
*
@ -48,7 +48,7 @@ public enum ClusterHealthStatus {
case 2:
return RED;
default:
throw new ElasticSearchIllegalArgumentException("No cluster health status for value [" + value + "]");
throw new ElasticsearchIllegalArgumentException("No cluster health status for value [" + value + "]");
}
}
}
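The value-to-status mapping above is easy to exercise in isolation. A stand-in enum sketch (the real `ClusterHealthStatus` carries more state; the 0/1/2 ordering is taken from the switch shown above, with the earlier cases assumed):

```java
public class HealthStatusSketch {
    enum Status { GREEN, YELLOW, RED }

    // Same mapping as ClusterHealthStatus.fromValue: 0=GREEN, 1=YELLOW, 2=RED.
    static Status fromValue(byte value) {
        switch (value) {
            case 0: return Status.GREEN;
            case 1: return Status.YELLOW;
            case 2: return Status.RED;
            default: throw new IllegalArgumentException(
                    "No cluster health status for value [" + value + "]");
        }
    }

    public static void main(String[] args) {
        System.out.println(fromValue((byte) 1)); // prints YELLOW
    }
}
```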

View File

@ -19,7 +19,7 @@
package org.elasticsearch.action.admin.cluster.health;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;
import org.elasticsearch.cluster.ClusterName;
@ -77,7 +77,7 @@ public class TransportClusterHealthAction extends TransportMasterNodeOperationAc
}
@Override
protected void masterOperation(final ClusterHealthRequest request, final ClusterState unusedState, final ActionListener<ClusterHealthResponse> listener) throws ElasticSearchException {
protected void masterOperation(final ClusterHealthRequest request, final ClusterState unusedState, final ActionListener<ClusterHealthResponse> listener) throws ElasticsearchException {
long endTime = System.currentTimeMillis() + request.timeout().millis();
if (request.waitForEvents() != null) {

View File

@ -20,7 +20,7 @@
package org.elasticsearch.action.admin.cluster.node.hotthreads;
import com.google.common.collect.Lists;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.support.nodes.NodeOperationRequest;
import org.elasticsearch.action.support.nodes.TransportNodesOperationAction;
import org.elasticsearch.cluster.ClusterName;
@ -91,7 +91,7 @@ public class TransportNodesHotThreadsAction extends TransportNodesOperationActio
}
@Override
protected NodeHotThreads nodeOperation(NodeRequest request) throws ElasticSearchException {
protected NodeHotThreads nodeOperation(NodeRequest request) throws ElasticsearchException {
HotThreads hotThreads = new HotThreads()
.busiestThreads(request.request.threads)
.type(request.request.type)
@ -100,7 +100,7 @@ public class TransportNodesHotThreadsAction extends TransportNodesOperationActio
try {
return new NodeHotThreads(clusterService.localNode(), hotThreads.detect());
} catch (Exception e) {
throw new ElasticSearchException("failed to detect hot threads", e);
throw new ElasticsearchException("failed to detect hot threads", e);
}
}

View File

@ -19,7 +19,7 @@
package org.elasticsearch.action.admin.cluster.node.info;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.support.nodes.NodeOperationRequest;
import org.elasticsearch.action.support.nodes.TransportNodesOperationAction;
import org.elasticsearch.cluster.ClusterName;
@ -95,7 +95,7 @@ public class TransportNodesInfoAction extends TransportNodesOperationAction<Node
}
@Override
protected NodeInfo nodeOperation(NodeInfoRequest nodeRequest) throws ElasticSearchException {
protected NodeInfo nodeOperation(NodeInfoRequest nodeRequest) throws ElasticsearchException {
NodesInfoRequest request = nodeRequest.request;
return nodeService.info(request.settings(), request.os(), request.process(), request.jvm(), request.threadPool(),
request.network(), request.transport(), request.http(), request.plugin());

View File

@ -19,8 +19,8 @@
package org.elasticsearch.action.admin.cluster.node.restart;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticSearchIllegalStateException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.ElasticsearchIllegalStateException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.nodes.NodeOperationRequest;
import org.elasticsearch.action.support.nodes.TransportNodesOperationAction;
@ -65,7 +65,7 @@ public class TransportNodesRestartAction extends TransportNodesOperationAction<N
@Override
protected void doExecute(NodesRestartRequest nodesRestartRequest, ActionListener<NodesRestartResponse> listener) {
listener.onFailure(new ElasticSearchIllegalStateException("restart is disabled (for now) ...."));
listener.onFailure(new ElasticsearchIllegalStateException("restart is disabled (for now) ...."));
}
@Override
@ -111,9 +111,9 @@ public class TransportNodesRestartAction extends TransportNodesOperationAction<N
}
@Override
protected NodesRestartResponse.NodeRestartResponse nodeOperation(NodeRestartRequest request) throws ElasticSearchException {
protected NodesRestartResponse.NodeRestartResponse nodeOperation(NodeRestartRequest request) throws ElasticsearchException {
if (disabled) {
throw new ElasticSearchIllegalStateException("Restart is disabled");
throw new ElasticsearchIllegalStateException("Restart is disabled");
}
if (!restartRequested.compareAndSet(false, true)) {
return new NodesRestartResponse.NodeRestartResponse(clusterService.localNode());

View File

@ -21,8 +21,8 @@ package org.elasticsearch.action.admin.cluster.node.shutdown;
import com.carrotsearch.hppc.ObjectOpenHashSet;
import com.carrotsearch.hppc.cursors.ObjectCursor;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticSearchIllegalStateException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.ElasticsearchIllegalStateException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;
import org.elasticsearch.cluster.ClusterName;
@@ -97,9 +97,9 @@ public class TransportNodesShutdownAction extends TransportMasterNodeOperationAc
}
@Override
protected void masterOperation(final NodesShutdownRequest request, final ClusterState state, final ActionListener<NodesShutdownResponse> listener) throws ElasticSearchException {
protected void masterOperation(final NodesShutdownRequest request, final ClusterState state, final ActionListener<NodesShutdownResponse> listener) throws ElasticsearchException {
if (disabled) {
throw new ElasticSearchIllegalStateException("Shutdown is disabled");
throw new ElasticsearchIllegalStateException("Shutdown is disabled");
}
final ObjectOpenHashSet<DiscoveryNode> nodes = new ObjectOpenHashSet<DiscoveryNode>();
if (state.nodes().isAllNodes(request.nodesIds)) {
@@ -240,7 +240,7 @@ public class TransportNodesShutdownAction extends TransportMasterNodeOperationAc
@Override
public void messageReceived(final NodeShutdownRequest request, TransportChannel channel) throws Exception {
if (disabled) {
throw new ElasticSearchIllegalStateException("Shutdown is disabled");
throw new ElasticsearchIllegalStateException("Shutdown is disabled");
}
logger.info("shutting down in [{}]", delay);
Thread t = new Thread(new Runnable() {

View File

@@ -20,7 +20,7 @@
package org.elasticsearch.action.admin.cluster.node.stats;
import com.google.common.collect.Lists;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.support.nodes.NodeOperationRequest;
import org.elasticsearch.action.support.nodes.TransportNodesOperationAction;
import org.elasticsearch.cluster.ClusterName;
@@ -95,7 +95,7 @@ public class TransportNodesStatsAction extends TransportNodesOperationAction<Nod
}
@Override
protected NodeStats nodeOperation(NodeStatsRequest nodeStatsRequest) throws ElasticSearchException {
protected NodeStats nodeOperation(NodeStatsRequest nodeStatsRequest) throws ElasticsearchException {
NodesStatsRequest request = nodeStatsRequest.request;
return nodeService.stats(request.indices(), request.os(), request.process(), request.jvm(), request.threadPool(), request.network(),
request.fs(), request.transport(), request.http(), request.breaker());

View File

@@ -19,7 +19,7 @@
package org.elasticsearch.action.admin.cluster.repositories.delete;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;
import org.elasticsearch.cluster.ClusterService;
@@ -72,7 +72,7 @@ public class TransportDeleteRepositoryAction extends TransportMasterNodeOperatio
}
@Override
protected void masterOperation(final DeleteRepositoryRequest request, ClusterState state, final ActionListener<DeleteRepositoryResponse> listener) throws ElasticSearchException {
protected void masterOperation(final DeleteRepositoryRequest request, ClusterState state, final ActionListener<DeleteRepositoryResponse> listener) throws ElasticsearchException {
repositoriesService.unregisterRepository(
new RepositoriesService.UnregisterRepositoryRequest("delete_repository [" + request.name() + "]", request.name())
.masterNodeTimeout(request.masterNodeTimeout()).ackTimeout(request.timeout()),

View File

@@ -20,7 +20,7 @@
package org.elasticsearch.action.admin.cluster.repositories.get;
import com.google.common.collect.ImmutableList;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;
import org.elasticsearch.cluster.ClusterService;
@@ -73,7 +73,7 @@ public class TransportGetRepositoriesAction extends TransportMasterNodeOperation
}
@Override
protected void masterOperation(final GetRepositoriesRequest request, ClusterState state, final ActionListener<GetRepositoriesResponse> listener) throws ElasticSearchException {
protected void masterOperation(final GetRepositoriesRequest request, ClusterState state, final ActionListener<GetRepositoriesResponse> listener) throws ElasticsearchException {
MetaData metaData = state.metaData();
RepositoriesMetaData repositories = metaData.custom(RepositoriesMetaData.TYPE);
if (request.repositories().length == 0 || (request.repositories().length == 1 && "_all".equals(request.repositories()[0]))) {

View File

@@ -19,8 +19,8 @@
package org.elasticsearch.action.admin.cluster.repositories.put;
import org.elasticsearch.ElasticSearchGenerationException;
import org.elasticsearch.ElasticSearchIllegalArgumentException;
import org.elasticsearch.ElasticsearchGenerationException;
import org.elasticsearch.ElasticsearchIllegalArgumentException;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.support.master.AcknowledgedRequest;
import org.elasticsearch.common.bytes.BytesReference;
@@ -164,7 +164,7 @@ public class PutRepositoryRequest extends AcknowledgedRequest<PutRepositoryReque
builder.map(source);
settings(builder.string());
} catch (IOException e) {
throw new ElasticSearchGenerationException("Failed to generate [" + source + "]", e);
throw new ElasticsearchGenerationException("Failed to generate [" + source + "]", e);
}
return this;
}
@@ -201,7 +201,7 @@ public class PutRepositoryRequest extends AcknowledgedRequest<PutRepositoryReque
type(entry.getValue().toString());
} else if (name.equals("settings")) {
if (!(entry.getValue() instanceof Map)) {
throw new ElasticSearchIllegalArgumentException("Malformed settings section, should include an inner object");
throw new ElasticsearchIllegalArgumentException("Malformed settings section, should include an inner object");
}
settings((Map<String, Object>) entry.getValue());
}
@@ -219,7 +219,7 @@ public class PutRepositoryRequest extends AcknowledgedRequest<PutRepositoryReque
try {
return source(XContentFactory.xContent(repositoryDefinition).createParser(repositoryDefinition).mapOrderedAndClose());
} catch (IOException e) {
throw new ElasticSearchIllegalArgumentException("failed to parse repository source [" + repositoryDefinition + "]", e);
throw new ElasticsearchIllegalArgumentException("failed to parse repository source [" + repositoryDefinition + "]", e);
}
}
@@ -243,7 +243,7 @@ public class PutRepositoryRequest extends AcknowledgedRequest<PutRepositoryReque
try {
return source(XContentFactory.xContent(repositoryDefinition, offset, length).createParser(repositoryDefinition, offset, length).mapOrderedAndClose());
} catch (IOException e) {
throw new ElasticSearchIllegalArgumentException("failed to parse repository source", e);
throw new ElasticsearchIllegalArgumentException("failed to parse repository source", e);
}
}
@@ -257,7 +257,7 @@ public class PutRepositoryRequest extends AcknowledgedRequest<PutRepositoryReque
try {
return source(XContentFactory.xContent(repositoryDefinition).createParser(repositoryDefinition).mapOrderedAndClose());
} catch (IOException e) {
throw new ElasticSearchIllegalArgumentException("failed to parse template source", e);
throw new ElasticsearchIllegalArgumentException("failed to parse template source", e);
}
}

View File

@@ -19,7 +19,7 @@
package org.elasticsearch.action.admin.cluster.repositories.put;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;
import org.elasticsearch.cluster.ClusterService;
@@ -72,7 +72,7 @@ public class TransportPutRepositoryAction extends TransportMasterNodeOperationAc
}
@Override
protected void masterOperation(final PutRepositoryRequest request, ClusterState state, final ActionListener<PutRepositoryResponse> listener) throws ElasticSearchException {
protected void masterOperation(final PutRepositoryRequest request, ClusterState state, final ActionListener<PutRepositoryResponse> listener) throws ElasticsearchException {
repositoriesService.registerRepository(new RepositoriesService.RegisterRepositoryRequest("put_repository [" + request.name() + "]", request.name(), request.type())
.settings(request.settings())

View File

@@ -19,7 +19,7 @@
package org.elasticsearch.action.admin.cluster.reroute;
import org.elasticsearch.ElasticSearchParseException;
import org.elasticsearch.ElasticsearchParseException;
import org.elasticsearch.Version;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.support.master.AcknowledgedRequest;
@@ -85,13 +85,13 @@ public class ClusterRerouteRequest extends AcknowledgedRequest<ClusterRerouteReq
if ("commands".equals(currentFieldName)) {
this.commands = AllocationCommands.fromXContent(parser);
} else {
throw new ElasticSearchParseException("failed to parse reroute request, got start array with wrong field name [" + currentFieldName + "]");
throw new ElasticsearchParseException("failed to parse reroute request, got start array with wrong field name [" + currentFieldName + "]");
}
} else if (token.isValue()) {
if ("dry_run".equals(currentFieldName) || "dryRun".equals(currentFieldName)) {
dryRun = parser.booleanValue();
} else {
throw new ElasticSearchParseException("failed to parse reroute request, got value with wrong field name [" + currentFieldName + "]");
throw new ElasticsearchParseException("failed to parse reroute request, got value with wrong field name [" + currentFieldName + "]");
}
}
}

View File

@@ -19,7 +19,7 @@
package org.elasticsearch.action.admin.cluster.reroute;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;
import org.elasticsearch.cluster.AckedClusterStateUpdateTask;
@@ -71,7 +71,7 @@ public class TransportClusterRerouteAction extends TransportMasterNodeOperationA
}
@Override
protected void masterOperation(final ClusterRerouteRequest request, final ClusterState state, final ActionListener<ClusterRerouteResponse> listener) throws ElasticSearchException {
protected void masterOperation(final ClusterRerouteRequest request, final ClusterState state, final ActionListener<ClusterRerouteResponse> listener) throws ElasticsearchException {
clusterService.submitStateUpdateTask("cluster_reroute (api)", Priority.URGENT, new AckedClusterStateUpdateTask() {
private volatile ClusterState clusterStateToSend;

View File

@@ -19,7 +19,7 @@
package org.elasticsearch.action.admin.cluster.settings;
import org.elasticsearch.ElasticSearchGenerationException;
import org.elasticsearch.ElasticsearchGenerationException;
import org.elasticsearch.Version;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.support.master.AcknowledgedRequest;
@@ -101,7 +101,7 @@ public class ClusterUpdateSettingsRequest extends AcknowledgedRequest<ClusterUpd
builder.map(source);
transientSettings(builder.string());
} catch (IOException e) {
throw new ElasticSearchGenerationException("Failed to generate [" + source + "]", e);
throw new ElasticsearchGenerationException("Failed to generate [" + source + "]", e);
}
return this;
}
@@ -140,7 +140,7 @@ public class ClusterUpdateSettingsRequest extends AcknowledgedRequest<ClusterUpd
builder.map(source);
persistentSettings(builder.string());
} catch (IOException e) {
throw new ElasticSearchGenerationException("Failed to generate [" + source + "]", e);
throw new ElasticsearchGenerationException("Failed to generate [" + source + "]", e);
}
return this;
}

View File

@@ -19,7 +19,7 @@
package org.elasticsearch.action.admin.cluster.settings;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;
import org.elasticsearch.cluster.AckedClusterStateUpdateTask;
@@ -83,7 +83,7 @@ public class TransportClusterUpdateSettingsAction extends TransportMasterNodeOpe
}
@Override
protected void masterOperation(final ClusterUpdateSettingsRequest request, final ClusterState state, final ActionListener<ClusterUpdateSettingsResponse> listener) throws ElasticSearchException {
protected void masterOperation(final ClusterUpdateSettingsRequest request, final ClusterState state, final ActionListener<ClusterUpdateSettingsResponse> listener) throws ElasticsearchException {
final ImmutableSettings.Builder transientUpdates = ImmutableSettings.settingsBuilder();
final ImmutableSettings.Builder persistentUpdates = ImmutableSettings.settingsBuilder();

View File

@@ -19,7 +19,7 @@
package org.elasticsearch.action.admin.cluster.shards;
import org.elasticsearch.ElasticSearchIllegalArgumentException;
import org.elasticsearch.ElasticsearchIllegalArgumentException;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.support.IndicesOptions;
import org.elasticsearch.action.support.master.MasterNodeOperationRequest;
@@ -60,11 +60,11 @@ public class ClusterSearchShardsRequest extends MasterNodeOperationRequest<Clust
*/
public ClusterSearchShardsRequest indices(String... indices) {
if (indices == null) {
throw new ElasticSearchIllegalArgumentException("indices must not be null");
throw new ElasticsearchIllegalArgumentException("indices must not be null");
} else {
for (int i = 0; i < indices.length; i++) {
if (indices[i] == null) {
throw new ElasticSearchIllegalArgumentException("indices[" + i + "] must not be null");
throw new ElasticsearchIllegalArgumentException("indices[" + i + "] must not be null");
}
}
}

View File

@@ -19,7 +19,7 @@
package org.elasticsearch.action.admin.cluster.shards;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;
import org.elasticsearch.cluster.ClusterService;
@@ -74,7 +74,7 @@ public class TransportClusterSearchShardsAction extends TransportMasterNodeOpera
}
@Override
protected void masterOperation(final ClusterSearchShardsRequest request, final ClusterState state, final ActionListener<ClusterSearchShardsResponse> listener) throws ElasticSearchException {
protected void masterOperation(final ClusterSearchShardsRequest request, final ClusterState state, final ActionListener<ClusterSearchShardsResponse> listener) throws ElasticsearchException {
ClusterState clusterState = clusterService.state();
String[] concreteIndices = clusterState.metaData().concreteIndices(request.indices(), request.indicesOptions());
Map<String, Set<String>> routingMap = clusterState.metaData().resolveSearchRouting(request.routing(), request.indices());

View File

@@ -19,8 +19,8 @@
package org.elasticsearch.action.admin.cluster.snapshots.create;
import org.elasticsearch.ElasticSearchGenerationException;
import org.elasticsearch.ElasticSearchIllegalArgumentException;
import org.elasticsearch.ElasticsearchGenerationException;
import org.elasticsearch.ElasticsearchIllegalArgumentException;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.support.IndicesOptions;
import org.elasticsearch.action.support.master.MasterNodeOperationRequest;
@@ -288,7 +288,7 @@ public class CreateSnapshotRequest extends MasterNodeOperationRequest<CreateSnap
builder.map(source);
settings(builder.string());
} catch (IOException e) {
throw new ElasticSearchGenerationException("Failed to generate [" + source + "]", e);
throw new ElasticsearchGenerationException("Failed to generate [" + source + "]", e);
}
return this;
}
@@ -350,7 +350,7 @@ public class CreateSnapshotRequest extends MasterNodeOperationRequest<CreateSnap
} else if (entry.getValue() instanceof ArrayList) {
indices((ArrayList<String>) entry.getValue());
} else {
throw new ElasticSearchIllegalArgumentException("malformed indices section, should be an array of strings");
throw new ElasticsearchIllegalArgumentException("malformed indices section, should be an array of strings");
}
} else if (name.equals("ignore_unavailable") || name.equals("ignoreUnavailable")) {
assert entry.getValue() instanceof String;
@@ -366,12 +366,12 @@ public class CreateSnapshotRequest extends MasterNodeOperationRequest<CreateSnap
expandWildcardsClosed = Boolean.valueOf(entry.getValue().toString());
} else if (name.equals("settings")) {
if (!(entry.getValue() instanceof Map)) {
throw new ElasticSearchIllegalArgumentException("malformed settings section, should indices an inner object");
throw new ElasticsearchIllegalArgumentException("malformed settings section, should indices an inner object");
}
settings((Map<String, Object>) entry.getValue());
} else if (name.equals("include_global_state")) {
if (!(entry.getValue() instanceof Boolean)) {
throw new ElasticSearchIllegalArgumentException("malformed include_global_state, should be boolean");
throw new ElasticsearchIllegalArgumentException("malformed include_global_state, should be boolean");
}
includeGlobalState((Boolean) entry.getValue());
}
@@ -391,7 +391,7 @@ public class CreateSnapshotRequest extends MasterNodeOperationRequest<CreateSnap
try {
return source(XContentFactory.xContent(source).createParser(source).mapOrderedAndClose());
} catch (Exception e) {
throw new ElasticSearchIllegalArgumentException("failed to parse repository source [" + source + "]", e);
throw new ElasticsearchIllegalArgumentException("failed to parse repository source [" + source + "]", e);
}
}
return this;
@@ -420,7 +420,7 @@ public class CreateSnapshotRequest extends MasterNodeOperationRequest<CreateSnap
try {
return source(XContentFactory.xContent(source, offset, length).createParser(source, offset, length).mapOrderedAndClose());
} catch (IOException e) {
throw new ElasticSearchIllegalArgumentException("failed to parse repository source", e);
throw new ElasticsearchIllegalArgumentException("failed to parse repository source", e);
}
}
return this;
@@ -436,7 +436,7 @@ public class CreateSnapshotRequest extends MasterNodeOperationRequest<CreateSnap
try {
return source(XContentFactory.xContent(source).createParser(source).mapOrderedAndClose());
} catch (IOException e) {
throw new ElasticSearchIllegalArgumentException("failed to parse snapshot source", e);
throw new ElasticsearchIllegalArgumentException("failed to parse snapshot source", e);
}
}

View File

@@ -19,7 +19,7 @@
package org.elasticsearch.action.admin.cluster.snapshots.create;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;
import org.elasticsearch.cluster.ClusterService;
@@ -73,7 +73,7 @@ public class TransportCreateSnapshotAction extends TransportMasterNodeOperationA
}
@Override
protected void masterOperation(final CreateSnapshotRequest request, ClusterState state, final ActionListener<CreateSnapshotResponse> listener) throws ElasticSearchException {
protected void masterOperation(final CreateSnapshotRequest request, ClusterState state, final ActionListener<CreateSnapshotResponse> listener) throws ElasticsearchException {
SnapshotsService.SnapshotRequest snapshotRequest =
new SnapshotsService.SnapshotRequest("create_snapshot[" + request.snapshot() + "]", request.snapshot(), request.repository())
.indices(request.indices())

View File

@@ -19,7 +19,7 @@
package org.elasticsearch.action.admin.cluster.snapshots.delete;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;
import org.elasticsearch.cluster.ClusterService;
@@ -72,7 +72,7 @@ public class TransportDeleteSnapshotAction extends TransportMasterNodeOperationA
}
@Override
protected void masterOperation(final DeleteSnapshotRequest request, ClusterState state, final ActionListener<DeleteSnapshotResponse> listener) throws ElasticSearchException {
protected void masterOperation(final DeleteSnapshotRequest request, ClusterState state, final ActionListener<DeleteSnapshotResponse> listener) throws ElasticsearchException {
SnapshotId snapshotIds = new SnapshotId(request.repository(), request.snapshot());
snapshotsService.deleteSnapshot(snapshotIds, new SnapshotsService.DeleteSnapshotListener() {
@Override

View File

@@ -20,7 +20,7 @@
package org.elasticsearch.action.admin.cluster.snapshots.get;
import com.google.common.collect.ImmutableList;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;
import org.elasticsearch.cluster.ClusterService;
@@ -75,7 +75,7 @@ public class TransportGetSnapshotsAction extends TransportMasterNodeOperationAct
}
@Override
protected void masterOperation(final GetSnapshotsRequest request, ClusterState state, final ActionListener<GetSnapshotsResponse> listener) throws ElasticSearchException {
protected void masterOperation(final GetSnapshotsRequest request, ClusterState state, final ActionListener<GetSnapshotsResponse> listener) throws ElasticsearchException {
SnapshotId[] snapshotIds = new SnapshotId[request.snapshots().length];
for (int i = 0; i < snapshotIds.length; i++) {
snapshotIds[i] = new SnapshotId(request.repository(), request.snapshots()[i]);

View File

@@ -19,8 +19,8 @@
package org.elasticsearch.action.admin.cluster.snapshots.restore;
import org.elasticsearch.ElasticSearchGenerationException;
import org.elasticsearch.ElasticSearchIllegalArgumentException;
import org.elasticsearch.ElasticsearchGenerationException;
import org.elasticsearch.ElasticsearchIllegalArgumentException;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.support.IndicesOptions;
import org.elasticsearch.action.support.master.MasterNodeOperationRequest;
@@ -324,7 +324,7 @@ public class RestoreSnapshotRequest extends MasterNodeOperationRequest<RestoreSn
builder.map(source);
settings(builder.string());
} catch (IOException e) {
throw new ElasticSearchGenerationException("Failed to generate [" + source + "]", e);
throw new ElasticsearchGenerationException("Failed to generate [" + source + "]", e);
}
return this;
}
@@ -370,7 +370,7 @@ public class RestoreSnapshotRequest extends MasterNodeOperationRequest<RestoreSn
try {
return source(source.bytes());
} catch (Exception e) {
throw new ElasticSearchIllegalArgumentException("Failed to build json for repository request", e);
throw new ElasticsearchIllegalArgumentException("Failed to build json for repository request", e);
}
}
@@ -394,7 +394,7 @@ public class RestoreSnapshotRequest extends MasterNodeOperationRequest<RestoreSn
} else if (entry.getValue() instanceof ArrayList) {
indices((ArrayList<String>) entry.getValue());
} else {
throw new ElasticSearchIllegalArgumentException("malformed indices section, should be an array of strings");
throw new ElasticsearchIllegalArgumentException("malformed indices section, should be an array of strings");
}
} else if (name.equals("ignore_unavailable") || name.equals("ignoreUnavailable")) {
assert entry.getValue() instanceof String;
@@ -410,28 +410,28 @@ public class RestoreSnapshotRequest extends MasterNodeOperationRequest<RestoreSn
expandWildcardsClosed = Boolean.valueOf(entry.getValue().toString());
} else if (name.equals("settings")) {
if (!(entry.getValue() instanceof Map)) {
throw new ElasticSearchIllegalArgumentException("malformed settings section, should indices an inner object");
throw new ElasticsearchIllegalArgumentException("malformed settings section, should indices an inner object");
}
settings((Map<String, Object>) entry.getValue());
} else if (name.equals("include_global_state")) {
if (!(entry.getValue() instanceof Boolean)) {
throw new ElasticSearchIllegalArgumentException("malformed include_global_state, should be boolean");
throw new ElasticsearchIllegalArgumentException("malformed include_global_state, should be boolean");
}
includeGlobalState((Boolean) entry.getValue());
} else if (name.equals("rename_pattern")) {
if (entry.getValue() instanceof String) {
renamePattern((String) entry.getValue());
} else {
throw new ElasticSearchIllegalArgumentException("malformed rename_pattern");
throw new ElasticsearchIllegalArgumentException("malformed rename_pattern");
}
} else if (name.equals("rename_replacement")) {
if (entry.getValue() instanceof String) {
renameReplacement((String) entry.getValue());
} else {
throw new ElasticSearchIllegalArgumentException("malformed rename_replacement");
throw new ElasticsearchIllegalArgumentException("malformed rename_replacement");
}
} else {
throw new ElasticSearchIllegalArgumentException("Unknown parameter " + name);
throw new ElasticsearchIllegalArgumentException("Unknown parameter " + name);
}
}
indicesOptions(IndicesOptions.fromOptions(ignoreUnavailable, allowNoIndices, expandWildcardsOpen, expandWildcardsClosed));
@@ -451,7 +451,7 @@ public class RestoreSnapshotRequest extends MasterNodeOperationRequest<RestoreSn
try {
return source(XContentFactory.xContent(source).createParser(source).mapOrderedAndClose());
} catch (Exception e) {
throw new ElasticSearchIllegalArgumentException("failed to parse repository source [" + source + "]", e);
throw new ElasticsearchIllegalArgumentException("failed to parse repository source [" + source + "]", e);
}
}
return this;
@@ -484,7 +484,7 @@ public class RestoreSnapshotRequest extends MasterNodeOperationRequest<RestoreSn
try {
return source(XContentFactory.xContent(source, offset, length).createParser(source, offset, length).mapOrderedAndClose());
} catch (IOException e) {
throw new ElasticSearchIllegalArgumentException("failed to parse repository source", e);
throw new ElasticsearchIllegalArgumentException("failed to parse repository source", e);
}
}
return this;
@@ -502,7 +502,7 @@ public class RestoreSnapshotRequest extends MasterNodeOperationRequest<RestoreSn
try {
return source(XContentFactory.xContent(source).createParser(source).mapOrderedAndClose());
} catch (IOException e) {
throw new ElasticSearchIllegalArgumentException("failed to parse template source", e);
throw new ElasticsearchIllegalArgumentException("failed to parse template source", e);
}
}

View File

@@ -19,7 +19,7 @@
package org.elasticsearch.action.admin.cluster.snapshots.restore;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;
import org.elasticsearch.cluster.ClusterService;
@@ -74,7 +74,7 @@ public class TransportRestoreSnapshotAction extends TransportMasterNodeOperation
}
@Override
protected void masterOperation(final RestoreSnapshotRequest request, ClusterState state, final ActionListener<RestoreSnapshotResponse> listener) throws ElasticSearchException {
protected void masterOperation(final RestoreSnapshotRequest request, ClusterState state, final ActionListener<RestoreSnapshotResponse> listener) throws ElasticsearchException {
RestoreService.RestoreRequest restoreRequest =
new RestoreService.RestoreRequest("restore_snapshot[" + request.snapshot() + "]", request.repository(), request.snapshot())
.indices(request.indices())

View File

@@ -19,7 +19,7 @@
package org.elasticsearch.action.admin.cluster.state;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;
import org.elasticsearch.cluster.ClusterName;
@@ -74,7 +74,7 @@ public class TransportClusterStateAction extends TransportMasterNodeOperationAct
}
@Override
protected void masterOperation(final ClusterStateRequest request, final ClusterState state, ActionListener<ClusterStateResponse> listener) throws ElasticSearchException {
protected void masterOperation(final ClusterStateRequest request, final ClusterState state, ActionListener<ClusterStateResponse> listener) throws ElasticsearchException {
ClusterState currentState = clusterService.state();
logger.trace("Serving cluster state request using version {}", currentState.version());
ClusterState.Builder builder = ClusterState.builder();

View File

@@ -19,7 +19,7 @@
package org.elasticsearch.action.admin.cluster.stats;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.admin.cluster.health.ClusterHealthStatus;
import org.elasticsearch.action.admin.cluster.health.ClusterIndexHealth;
import org.elasticsearch.action.admin.cluster.node.info.NodeInfo;
@@ -115,7 +115,7 @@ public class TransportClusterStatsAction extends TransportNodesOperationAction<C
}
@Override
protected ClusterStatsNodeResponse nodeOperation(ClusterStatsNodeRequest nodeRequest) throws ElasticSearchException {
protected ClusterStatsNodeResponse nodeOperation(ClusterStatsNodeRequest nodeRequest) throws ElasticsearchException {
NodeInfo nodeInfo = nodeService.info(false, true, false, true, false, false, true, false, true);
NodeStats nodeStats = nodeService.stats(CommonStatsFlags.NONE, false, true, true, false, false, true, false, false, false);
List<ShardStats> shardsStats = new ArrayList<ShardStats>();

View File

@@ -19,7 +19,7 @@
package org.elasticsearch.action.admin.cluster.tasks;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;
import org.elasticsearch.cluster.ClusterService;
@@ -63,7 +63,7 @@ public class TransportPendingClusterTasksAction extends TransportMasterNodeOpera
}
@Override
protected void masterOperation(PendingClusterTasksRequest request, ClusterState state, ActionListener<PendingClusterTasksResponse> listener) throws ElasticSearchException {
protected void masterOperation(PendingClusterTasksRequest request, ClusterState state, ActionListener<PendingClusterTasksResponse> listener) throws ElasticsearchException {
listener.onResponse(new PendingClusterTasksResponse(clusterService.pendingTasks()));
}
}

View File

@@ -20,7 +20,7 @@
package org.elasticsearch.action.admin.indices.alias;
import com.google.common.collect.Lists;
import org.elasticsearch.ElasticSearchGenerationException;
import org.elasticsearch.ElasticsearchGenerationException;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.support.master.AcknowledgedRequest;
import org.elasticsearch.cluster.metadata.AliasAction;
@@ -93,7 +93,7 @@ public class IndicesAliasesRequest extends AcknowledgedRequest<IndicesAliasesReq
aliasActions.add(new AliasAction(AliasAction.Type.ADD, index, alias, builder.string()));
return this;
} catch (IOException e) {
throw new ElasticSearchGenerationException("Failed to generate [" + filter + "]", e);
throw new ElasticsearchGenerationException("Failed to generate [" + filter + "]", e);
}
}
@@ -115,7 +115,7 @@ public class IndicesAliasesRequest extends AcknowledgedRequest<IndicesAliasesReq
builder.close();
return addAlias(index, alias, builder.string());
} catch (IOException e) {
throw new ElasticSearchGenerationException("Failed to build json for alias request", e);
throw new ElasticsearchGenerationException("Failed to build json for alias request", e);
}
}

View File

@@ -20,7 +20,7 @@
package org.elasticsearch.action.admin.indices.alias;
import com.google.common.collect.Sets;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;
import org.elasticsearch.cluster.ClusterService;
@@ -83,7 +83,7 @@ public class TransportIndicesAliasesAction extends TransportMasterNodeOperationA
}
@Override
protected void masterOperation(final IndicesAliasesRequest request, final ClusterState state, final ActionListener<IndicesAliasesResponse> listener) throws ElasticSearchException {
protected void masterOperation(final IndicesAliasesRequest request, final ClusterState state, final ActionListener<IndicesAliasesResponse> listener) throws ElasticsearchException {
IndicesAliasesClusterStateUpdateRequest updateRequest = new IndicesAliasesClusterStateUpdateRequest()
.ackTimeout(request.timeout()).masterNodeTimeout(request.masterNodeTimeout())


@@ -18,7 +18,7 @@
package org.elasticsearch.action.admin.indices.alias.exists;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.admin.indices.alias.get.GetAliasesRequest;
import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;
@@ -60,7 +60,7 @@ public class TransportAliasesExistAction extends TransportMasterNodeOperationAct
}
@Override
protected void masterOperation(GetAliasesRequest request, ClusterState state, ActionListener<AliasesExistResponse> listener) throws ElasticSearchException {
protected void masterOperation(GetAliasesRequest request, ClusterState state, ActionListener<AliasesExistResponse> listener) throws ElasticsearchException {
String[] concreteIndices = state.metaData().concreteIndices(request.indices(), request.indicesOptions());
request.indices(concreteIndices);


@@ -18,7 +18,7 @@
package org.elasticsearch.action.admin.indices.alias.get;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;
import org.elasticsearch.cluster.ClusterService;
@@ -63,7 +63,7 @@ public class TransportGetAliasesAction extends TransportMasterNodeOperationActio
}
@Override
protected void masterOperation(GetAliasesRequest request, ClusterState state, ActionListener<GetAliasesResponse> listener) throws ElasticSearchException {
protected void masterOperation(GetAliasesRequest request, ClusterState state, ActionListener<GetAliasesResponse> listener) throws ElasticsearchException {
String[] concreteIndices = state.metaData().concreteIndices(request.indices(), request.indicesOptions());
request.indices(concreteIndices);


@@ -26,8 +26,8 @@ import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;
import org.apache.lucene.analysis.tokenattributes.TypeAttribute;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticSearchIllegalArgumentException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.ElasticsearchIllegalArgumentException;
import org.elasticsearch.action.support.single.custom.TransportSingleCustomOperationAction;
import org.elasticsearch.cluster.ClusterService;
import org.elasticsearch.cluster.ClusterState;
@@ -111,7 +111,7 @@ public class TransportAnalyzeAction extends TransportSingleCustomOperationAction
}
@Override
protected AnalyzeResponse shardOperation(AnalyzeRequest request, int shardId) throws ElasticSearchException {
protected AnalyzeResponse shardOperation(AnalyzeRequest request, int shardId) throws ElasticsearchException {
IndexService indexService = null;
if (request.index() != null) {
indexService = indicesService.indexServiceSafe(request.index());
@@ -121,12 +121,12 @@ public class TransportAnalyzeAction extends TransportSingleCustomOperationAction
String field = null;
if (request.field() != null) {
if (indexService == null) {
throw new ElasticSearchIllegalArgumentException("No index provided, and trying to analyzer based on a specific field which requires the index parameter");
throw new ElasticsearchIllegalArgumentException("No index provided, and trying to analyzer based on a specific field which requires the index parameter");
}
FieldMapper<?> fieldMapper = indexService.mapperService().smartNameFieldMapper(request.field());
if (fieldMapper != null) {
if (fieldMapper.isNumeric()) {
throw new ElasticSearchIllegalArgumentException("Can't process field [" + request.field() + "], Analysis requests are not supported on numeric fields");
throw new ElasticsearchIllegalArgumentException("Can't process field [" + request.field() + "], Analysis requests are not supported on numeric fields");
}
analyzer = fieldMapper.indexAnalyzer();
field = fieldMapper.names().indexName();
@@ -147,20 +147,20 @@ public class TransportAnalyzeAction extends TransportSingleCustomOperationAction
analyzer = indexService.analysisService().analyzer(request.analyzer());
}
if (analyzer == null) {
throw new ElasticSearchIllegalArgumentException("failed to find analyzer [" + request.analyzer() + "]");
throw new ElasticsearchIllegalArgumentException("failed to find analyzer [" + request.analyzer() + "]");
}
} else if (request.tokenizer() != null) {
TokenizerFactory tokenizerFactory;
if (indexService == null) {
TokenizerFactoryFactory tokenizerFactoryFactory = indicesAnalysisService.tokenizerFactoryFactory(request.tokenizer());
if (tokenizerFactoryFactory == null) {
throw new ElasticSearchIllegalArgumentException("failed to find global tokenizer under [" + request.tokenizer() + "]");
throw new ElasticsearchIllegalArgumentException("failed to find global tokenizer under [" + request.tokenizer() + "]");
}
tokenizerFactory = tokenizerFactoryFactory.create(request.tokenizer(), ImmutableSettings.Builder.EMPTY_SETTINGS);
} else {
tokenizerFactory = indexService.analysisService().tokenizer(request.tokenizer());
if (tokenizerFactory == null) {
throw new ElasticSearchIllegalArgumentException("failed to find tokenizer under [" + request.tokenizer() + "]");
throw new ElasticsearchIllegalArgumentException("failed to find tokenizer under [" + request.tokenizer() + "]");
}
}
TokenFilterFactory[] tokenFilterFactories = new TokenFilterFactory[0];
@@ -171,17 +171,17 @@ public class TransportAnalyzeAction extends TransportSingleCustomOperationAction
if (indexService == null) {
TokenFilterFactoryFactory tokenFilterFactoryFactory = indicesAnalysisService.tokenFilterFactoryFactory(tokenFilterName);
if (tokenFilterFactoryFactory == null) {
throw new ElasticSearchIllegalArgumentException("failed to find global token filter under [" + request.tokenizer() + "]");
throw new ElasticsearchIllegalArgumentException("failed to find global token filter under [" + request.tokenizer() + "]");
}
tokenFilterFactories[i] = tokenFilterFactoryFactory.create(tokenFilterName, ImmutableSettings.Builder.EMPTY_SETTINGS);
} else {
tokenFilterFactories[i] = indexService.analysisService().tokenFilter(tokenFilterName);
if (tokenFilterFactories[i] == null) {
throw new ElasticSearchIllegalArgumentException("failed to find token filter under [" + request.tokenizer() + "]");
throw new ElasticsearchIllegalArgumentException("failed to find token filter under [" + request.tokenizer() + "]");
}
}
if (tokenFilterFactories[i] == null) {
throw new ElasticSearchIllegalArgumentException("failed to find token filter under [" + request.tokenizer() + "]");
throw new ElasticsearchIllegalArgumentException("failed to find token filter under [" + request.tokenizer() + "]");
}
}
}
@@ -195,7 +195,7 @@ public class TransportAnalyzeAction extends TransportSingleCustomOperationAction
}
}
if (analyzer == null) {
throw new ElasticSearchIllegalArgumentException("failed to find analyzer");
throw new ElasticsearchIllegalArgumentException("failed to find analyzer");
}
List<AnalyzeResponse.AnalyzeToken> tokens = Lists.newArrayList();
@@ -218,7 +218,7 @@ public class TransportAnalyzeAction extends TransportSingleCustomOperationAction
}
stream.end();
} catch (IOException e) {
throw new ElasticSearchException("failed to analyze", e);
throw new ElasticsearchException("failed to analyze", e);
} finally {
if (stream != null) {
try {


@@ -19,7 +19,7 @@
package org.elasticsearch.action.admin.indices.cache.clear;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.ShardOperationFailedException;
import org.elasticsearch.action.support.DefaultShardOperationFailedException;
import org.elasticsearch.action.support.broadcast.BroadcastShardOperationFailedException;
@@ -116,7 +116,7 @@ public class TransportClearIndicesCacheAction extends TransportBroadcastOperatio
}
@Override
protected ShardClearIndicesCacheResponse shardOperation(ShardClearIndicesCacheRequest request) throws ElasticSearchException {
protected ShardClearIndicesCacheResponse shardOperation(ShardClearIndicesCacheRequest request) throws ElasticsearchException {
IndexService service = indicesService.indexService(request.index());
if (service != null) {
// we always clear the query cache


@@ -19,8 +19,8 @@
package org.elasticsearch.action.admin.indices.close;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticSearchIllegalArgumentException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.ElasticsearchIllegalArgumentException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;
import org.elasticsearch.cluster.ClusterService;
@@ -82,7 +82,7 @@ public class TransportCloseIndexAction extends TransportMasterNodeOperationActio
if (disableCloseAllIndices) {
if (state.metaData().isExplicitAllIndices(indicesOrAliases) ||
state.metaData().isPatternMatchingAllIndices(indicesOrAliases, request.indices())) {
throw new ElasticSearchIllegalArgumentException("closing all indices is disabled");
throw new ElasticsearchIllegalArgumentException("closing all indices is disabled");
}
}
@@ -95,7 +95,7 @@ public class TransportCloseIndexAction extends TransportMasterNodeOperationActio
}
@Override
protected void masterOperation(final CloseIndexRequest request, final ClusterState state, final ActionListener<CloseIndexResponse> listener) throws ElasticSearchException {
protected void masterOperation(final CloseIndexRequest request, final ClusterState state, final ActionListener<CloseIndexResponse> listener) throws ElasticsearchException {
CloseIndexClusterStateUpdateRequest updateRequest = new CloseIndexClusterStateUpdateRequest()
.ackTimeout(request.timeout()).masterNodeTimeout(request.masterNodeTimeout())


@@ -20,9 +20,9 @@
package org.elasticsearch.action.admin.indices.create;
import com.google.common.base.Charsets;
import org.elasticsearch.ElasticSearchGenerationException;
import org.elasticsearch.ElasticSearchIllegalArgumentException;
import org.elasticsearch.ElasticSearchParseException;
import org.elasticsearch.ElasticsearchGenerationException;
import org.elasticsearch.ElasticsearchIllegalArgumentException;
import org.elasticsearch.ElasticsearchParseException;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.admin.indices.mapping.put.PutMappingRequest;
import org.elasticsearch.action.support.master.AcknowledgedRequest;
@@ -160,7 +160,7 @@ public class CreateIndexRequest extends AcknowledgedRequest<CreateIndexRequest>
try {
settings(builder.string());
} catch (IOException e) {
throw new ElasticSearchGenerationException("Failed to generate json settings from builder", e);
throw new ElasticsearchGenerationException("Failed to generate json settings from builder", e);
}
return this;
}
@@ -175,7 +175,7 @@ public class CreateIndexRequest extends AcknowledgedRequest<CreateIndexRequest>
builder.map(source);
settings(builder.string());
} catch (IOException e) {
throw new ElasticSearchGenerationException("Failed to generate [" + source + "]", e);
throw new ElasticsearchGenerationException("Failed to generate [" + source + "]", e);
}
return this;
}
@@ -209,7 +209,7 @@ public class CreateIndexRequest extends AcknowledgedRequest<CreateIndexRequest>
try {
mappings.put(type, source.string());
} catch (IOException e) {
throw new ElasticSearchIllegalArgumentException("Failed to build json for mapping request", e);
throw new ElasticsearchIllegalArgumentException("Failed to build json for mapping request", e);
}
return this;
}
@@ -231,7 +231,7 @@ public class CreateIndexRequest extends AcknowledgedRequest<CreateIndexRequest>
builder.map(source);
return mapping(type, builder.string());
} catch (IOException e) {
throw new ElasticSearchGenerationException("Failed to generate [" + source + "]", e);
throw new ElasticsearchGenerationException("Failed to generate [" + source + "]", e);
}
}
@@ -278,7 +278,7 @@ public class CreateIndexRequest extends AcknowledgedRequest<CreateIndexRequest>
try {
source(XContentFactory.xContent(xContentType).createParser(source).mapAndClose());
} catch (IOException e) {
throw new ElasticSearchParseException("failed to parse source for create index", e);
throw new ElasticsearchParseException("failed to parse source for create index", e);
}
} else {
settings(new String(source.toBytes(), Charsets.UTF_8));
@@ -311,7 +311,7 @@ public class CreateIndexRequest extends AcknowledgedRequest<CreateIndexRequest>
try {
customs.put(name, factory.fromMap((Map<String, Object>) entry.getValue()));
} catch (IOException e) {
throw new ElasticSearchParseException("failed to parse custom metadata for [" + name + "]");
throw new ElasticsearchParseException("failed to parse custom metadata for [" + name + "]");
}
}
}


@@ -19,7 +19,7 @@
package org.elasticsearch.action.admin.indices.create;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;
import org.elasticsearch.cluster.ClusterService;
@@ -76,7 +76,7 @@ public class TransportCreateIndexAction extends TransportMasterNodeOperationActi
}
@Override
protected void masterOperation(final CreateIndexRequest request, final ClusterState state, final ActionListener<CreateIndexResponse> listener) throws ElasticSearchException {
protected void masterOperation(final CreateIndexRequest request, final ClusterState state, final ActionListener<CreateIndexResponse> listener) throws ElasticsearchException {
String cause = request.cause();
if (cause.length() == 0) {
cause = "api";


@@ -19,8 +19,8 @@
package org.elasticsearch.action.admin.indices.delete;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticSearchIllegalArgumentException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.ElasticsearchIllegalArgumentException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.admin.indices.mapping.delete.TransportDeleteMappingAction;
import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;
@@ -86,7 +86,7 @@ public class TransportDeleteIndexAction extends TransportMasterNodeOperationActi
if (disableDeleteAllIndices) {
if (state.metaData().isAllIndices(indicesOrAliases) ||
state.metaData().isPatternMatchingAllIndices(indicesOrAliases, request.indices())) {
throw new ElasticSearchIllegalArgumentException("deleting all indices is disabled");
throw new ElasticsearchIllegalArgumentException("deleting all indices is disabled");
}
}
super.doExecute(request, listener);
@@ -98,7 +98,7 @@ public class TransportDeleteIndexAction extends TransportMasterNodeOperationActi
}
@Override
protected void masterOperation(final DeleteIndexRequest request, final ClusterState state, final ActionListener<DeleteIndexResponse> listener) throws ElasticSearchException {
protected void masterOperation(final DeleteIndexRequest request, final ClusterState state, final ActionListener<DeleteIndexResponse> listener) throws ElasticsearchException {
if (request.indices().length == 0) {
listener.onResponse(new DeleteIndexResponse(true));
return;


@@ -19,7 +19,7 @@
package org.elasticsearch.action.admin.indices.exists.indices;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;
import org.elasticsearch.cluster.ClusterService;
@@ -77,7 +77,7 @@ public class TransportIndicesExistsAction extends TransportMasterNodeOperationAc
}
@Override
protected void masterOperation(final IndicesExistsRequest request, final ClusterState state, final ActionListener<IndicesExistsResponse> listener) throws ElasticSearchException {
protected void masterOperation(final IndicesExistsRequest request, final ClusterState state, final ActionListener<IndicesExistsResponse> listener) throws ElasticsearchException {
boolean exists;
try {
// Similar as the previous behaviour, but now also aliases and wildcards are supported.


@@ -19,7 +19,7 @@
package org.elasticsearch.action.admin.indices.exists.types;
import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;
import org.elasticsearch.cluster.ClusterService;
@@ -71,7 +71,7 @@ public class TransportTypesExistsAction extends TransportMasterNodeOperationActi
}
@Override
protected void masterOperation(final TypesExistsRequest request, final ClusterState state, final ActionListener<TypesExistsResponse> listener) throws ElasticSearchException {
protected void masterOperation(final TypesExistsRequest request, final ClusterState state, final ActionListener<TypesExistsResponse> listener) throws ElasticsearchException {
String[] concreteIndices = state.metaData().concreteIndices(request.indices(), request.indicesOptions());
if (concreteIndices.length == 0) {
listener.onResponse(new TypesExistsResponse(false));


@@ -28,7 +28,7 @@ import java.io.IOException;
/**
* A flush request to flush one or more indices. The flush process of an index basically frees memory from the index
* by flushing data to the index storage and clearing the internal transaction log. By default, ElasticSearch uses
* by flushing data to the index storage and clearing the internal transaction log. By default, Elasticsearch uses
* memory heuristics in order to automatically trigger flush operations as required in order to clear memory.
* <p/>
* <p>Best created with {@link org.elasticsearch.client.Requests#flushRequest(String...)}.
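The Javadoc above points at the `Requests#flushRequest(String...)` helper. As a minimal sketch of how that helper would be used with the Java client of this era (assuming an already-connected `Client` instance named `client` and an existing index named `twitter`; both are illustrative assumptions, not part of the diff):

```java
// Sketch only: requires the Elasticsearch 1.x client library on the classpath
// and a running cluster; "client" and the "twitter" index are assumed here.
import org.elasticsearch.action.admin.indices.flush.FlushRequest;
import org.elasticsearch.action.admin.indices.flush.FlushResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.Requests;

public class FlushExample {
    public static void flushTwitter(Client client) {
        // Requests.flushRequest(...) builds a FlushRequest for the named indices.
        FlushRequest request = Requests.flushRequest("twitter");
        // Execute the flush through the indices admin client and wait for it.
        FlushResponse response = client.admin().indices().flush(request).actionGet();
        // The broadcast response reports how many shards completed the flush.
        System.out.println("flushed shards: " + response.getSuccessfulShards());
    }
}
```

Because flush is normally triggered automatically by the memory heuristics described above, an explicit call like this is mainly useful before snapshots or controlled shutdowns.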

Some files were not shown because too many files have changed in this diff.