Merge branch 'master' into die_cwd_die

This commit is contained in:
Ryan Ernst 2015-04-29 16:48:13 -07:00
commit d8fed71fd4
345 changed files with 13155 additions and 2461 deletions

View File

@ -99,3 +99,22 @@ By default, `BulkProcessor`:
* does not set flushInterval
* sets concurrentRequests to 1
When all documents are loaded into the `BulkProcessor` it can be closed by using the `awaitClose` or `close` methods:
[source,java]
--------------------------------------------------
bulkProcessor.awaitClose(10, TimeUnit.MINUTES);
--------------------------------------------------
or
[source,java]
--------------------------------------------------
bulkProcessor.close();
--------------------------------------------------
Both methods flush any remaining documents and disable all other scheduled flushes, if they were scheduled by setting
`flushInterval`. If concurrent requests were enabled, the `awaitClose` method waits for up to the specified timeout for
all bulk requests to complete and then returns `true`; if the specified waiting time elapses before all bulk requests complete,
`false` is returned. The `close` method doesn't wait for any remaining bulk requests to complete and exits immediately.
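For example, the returned `boolean` can be checked to detect whether the timeout elapsed (a minimal sketch; the `client` and `listener` variables are assumed to already exist):

[source,java]
--------------------------------------------------
BulkProcessor bulkProcessor = BulkProcessor.builder(client, listener)
        .setConcurrentRequests(1)
        .build();
// ... add index/delete requests to the processor ...
boolean completed = bulkProcessor.awaitClose(10, TimeUnit.MINUTES);
if (completed == false) {
    // some bulk requests were still pending when the timeout elapsed
}
--------------------------------------------------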

11 binary image files added (not shown; 63–72 KiB each).

View File

@ -426,6 +426,9 @@ and it can be retrieved from it).
in `_source`, have `include_in_all` enabled, or `store` be set to
`true` for this to be useful.
|`doc_values` |Set to `true` to store field values in a column-stride fashion.
Automatically set to `true` when the fielddata format is `doc_values`.
|`boost` |The boost value. Defaults to `1.0`.
|`null_value` |When there is a (JSON) null value for the field, use the

View File

@ -118,6 +118,38 @@ aggregated for the buckets created by their "parent" bucket aggregation.
There are different bucket aggregators, each with a different "bucketing" strategy. Some define a single bucket, some
define fixed number of multiple buckets, and others dynamically create the buckets during the aggregation process.
[float]
=== Reducer Aggregations
coming[2.0.0]
experimental[]
Reducer aggregations work on the outputs produced from other aggregations rather than from document sets, adding
information to the output tree. There are many different types of reducer, each computing different information from
other aggregations, but these types can be broken down into two families:
_Parent_::
A family of reducer aggregations that is provided with the output of its parent aggregation and is able
to compute new buckets or new aggregations to add to existing buckets.
_Sibling_::
Reducer aggregations that are provided with the output of a sibling aggregation and are able to compute a
new aggregation which will be at the same level as the sibling aggregation.
Reducer aggregations can reference the aggregations they need to perform their computation by using the `buckets_paths`
parameter to indicate the paths to the required metrics. The syntax for defining these paths can be found in the
<<search-aggregations-bucket-terms-aggregation-order, terms aggregation order>> section.
// TODO: decide whether the section about defining aggregation paths should live on this page and be referenced from the terms aggregation documentation.
Reducer aggregations cannot have sub-aggregations but, depending on the type, they can reference another reducer in the `buckets_path`,
allowing reducers to be chained.
NOTE: Because reducer aggregations only add to the output, when chaining reducer aggregations the output of each reducer will be
included in the final output.
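For orientation, the two families can be sketched together (a sketch reusing the sales example from the reducer pages that follow; all names are illustrative):

[source,js]
--------------------------------------------------
{
    "aggs" : {
        "sales_per_month" : {
            "date_histogram" : {
                "field" : "date",
                "interval" : "month",
                "min_doc_count" : 0
            },
            "aggs" : {
                "sales" : { "sum" : { "field" : "price" } },
                "sales_deriv" : {
                    "derivative" : { "buckets_paths" : "sales" } <1>
                }
            }
        },
        "max_monthly_sales" : {
            "max_bucket" : { "buckets_paths" : "sales_per_month>sales" } <2>
        }
    }
}
--------------------------------------------------
<1> A _parent_ reducer: embedded inside the aggregation whose output it consumes, adding values to its buckets.
<2> A _sibling_ reducer: defined next to the aggregation whose output it consumes, producing a new aggregation at the same level.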
[float]
=== Caching heavy aggregations
@ -197,3 +229,6 @@ Then that piece of metadata will be returned in place for our `titles` terms agg
include::aggregations/metrics.asciidoc[]
include::aggregations/bucket.asciidoc[]
include::aggregations/reducer.asciidoc[]

View File

@ -0,0 +1,5 @@
[[search-aggregations-reducer]]
include::reducer/derivative-aggregation.asciidoc[]
include::reducer/max-bucket-aggregation.asciidoc[]
include::reducer/movavg-aggregation.asciidoc[]

View File

@ -0,0 +1,194 @@
[[search-aggregations-reducer-derivative-aggregation]]
=== Derivative Aggregation
A parent reducer aggregation which calculates the derivative of a specified metric in a parent histogram (or date_histogram)
aggregation. The specified metric must be numeric and the enclosing histogram must have `min_doc_count` set to `0`.
The following snippet calculates the derivative of the total monthly `sales`:
[source,js]
--------------------------------------------------
{
"aggs" : {
"sales_per_month" : {
"date_histogram" : {
"field" : "date",
"interval" : "month",
"min_doc_count" : 0
},
"aggs": {
"sales": {
"sum": {
"field": "price"
}
},
"sales_deriv": {
"derivative": {
"buckets_paths": "sales" <1>
}
}
}
}
}
}
--------------------------------------------------
<1> `buckets_paths` instructs this derivative aggregation to use the output of the `sales` aggregation for the derivative
And the following may be the response:
[source,js]
--------------------------------------------------
{
"aggregations": {
"sales_per_month": {
"buckets": [
{
"key_as_string": "2015/01/01 00:00:00",
"key": 1420070400000,
"doc_count": 3,
"sales": {
"value": 550
} <1>
},
{
"key_as_string": "2015/02/01 00:00:00",
"key": 1422748800000,
"doc_count": 2,
"sales": {
"value": 60
},
"sales_deriv": {
"value": -490 <2>
}
},
{
"key_as_string": "2015/03/01 00:00:00",
"key": 1425168000000,
"doc_count": 2, <3>
"sales": {
"value": 375
},
"sales_deriv": {
"value": 315
}
}
]
}
}
}
--------------------------------------------------
<1> No derivative for the first bucket since we need at least 2 data points to calculate the derivative
<2> Derivative value units are implicitly defined by the `sales` aggregation and the parent histogram so in this case the units
would be $/month assuming the `price` field has units of $.
<3> The number of documents in the bucket is represented by the `doc_count` value
==== Second Order Derivative
A second order derivative can be calculated by chaining the derivative reducer aggregation onto the result of another derivative
reducer aggregation as in the following example which will calculate both the first and the second order derivative of the total
monthly sales:
[source,js]
--------------------------------------------------
{
"aggs" : {
"sales_per_month" : {
"date_histogram" : {
"field" : "date",
"interval" : "month"
},
"aggs": {
"sales": {
"sum": {
"field": "price"
}
},
"sales_deriv": {
"derivative": {
"buckets_paths": "sales"
}
},
"sales_2nd_deriv": {
"derivative": {
"buckets_paths": "sales_deriv" <1>
}
}
}
}
}
}
--------------------------------------------------
<1> `buckets_paths` for the second derivative points to the name of the first derivative
And the following may be the response:
[source,js]
--------------------------------------------------
{
"aggregations": {
"sales_per_month": {
"buckets": [
{
"key_as_string": "2015/01/01 00:00:00",
"key": 1420070400000,
"doc_count": 3,
"sales": {
"value": 550
} <1>
},
{
"key_as_string": "2015/02/01 00:00:00",
"key": 1422748800000,
"doc_count": 2,
"sales": {
"value": 60
},
"sales_deriv": {
"value": -490
} <1>
},
{
"key_as_string": "2015/03/01 00:00:00",
"key": 1425168000000,
"doc_count": 2,
"sales": {
"value": 375
},
"sales_deriv": {
"value": 315
},
"sales_2nd_deriv": {
"value": 805
}
}
]
}
}
}
--------------------------------------------------
<1> No second derivative for the first two buckets since we need at least 2 data points from the first derivative to calculate the
second derivative
==== Dealing with gaps in the data
There are a couple of reasons why the data output by the enclosing histogram may have gaps:
* There are no documents matching the query for some buckets
* The data for a metric is missing in all of the documents falling into a bucket (this is most likely with either a small interval
on the enclosing histogram or with a query matching only a small number of documents)
Where there is no data available in a bucket for a given metric, it presents a problem for calculating the derivative value for both
the current bucket and the next bucket. The derivative reducer aggregation has a `gap_policy` parameter to define what the behavior
should be when a gap in the data is found. There are currently two options for controlling the gap policy:
_ignore_::
This option will not produce a derivative value for any buckets where the value in the current or previous bucket is
missing
_insert_zeros_::
This option will replace missing values with `0` and calculate the derivative using that value.
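For example, the derivative from the monthly sales request above could be configured to treat empty buckets as zero values (a sketch; the placement of `gap_policy` alongside `buckets_paths` is assumed from the parameter description):

[source,js]
--------------------------------------------------
"sales_deriv": {
    "derivative": {
        "buckets_paths": "sales",
        "gap_policy": "insert_zeros"
    }
}
--------------------------------------------------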

View File

@ -0,0 +1,82 @@
[[search-aggregations-reducer-max-bucket-aggregation]]
=== Max Bucket Aggregation
A sibling reducer aggregation which identifies the bucket(s) with the maximum value of a specified metric in a sibling aggregation
and outputs both the value and the key(s) of the bucket(s). The specified metric must be numeric and the sibling aggregation must
be a multi-bucket aggregation.
The following snippet calculates the maximum of the total monthly `sales`:
[source,js]
--------------------------------------------------
{
"aggs" : {
"sales_per_month" : {
"date_histogram" : {
"field" : "date",
"interval" : "month"
},
"aggs": {
"sales": {
"sum": {
"field": "price"
}
}
}
},
"max_monthly_sales": {
"max_bucket": {
"buckets_paths": "sales_per_month>sales" <1>
}
}
}
}
--------------------------------------------------
<1> `buckets_paths` instructs this max_bucket aggregation that we want the maximum value of the `sales` aggregation in the
`sales_per_month` date histogram.
And the following may be the response:
[source,js]
--------------------------------------------------
{
"aggregations": {
"sales_per_month": {
"buckets": [
{
"key_as_string": "2015/01/01 00:00:00",
"key": 1420070400000,
"doc_count": 3,
"sales": {
"value": 550
}
},
{
"key_as_string": "2015/02/01 00:00:00",
"key": 1422748800000,
"doc_count": 2,
"sales": {
"value": 60
}
},
{
"key_as_string": "2015/03/01 00:00:00",
"key": 1425168000000,
"doc_count": 2,
"sales": {
"value": 375
}
}
]
},
"max_monthly_sales": {
"keys": ["2015/01/01 00:00:00"], <1>
"value": 550
}
}
}
--------------------------------------------------
<1> `keys` is an array of strings since the maximum value may be present in multiple buckets

View File

@ -0,0 +1,297 @@
[[search-aggregations-reducers-movavg-reducer]]
=== Moving Average Aggregation
Given an ordered series of data, the Moving Average aggregation will slide a window across the data and emit the average
value of that window. For example, given the data `[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]`, we can calculate a simple moving
average with a window size of `5` as follows:
- (1 + 2 + 3 + 4 + 5) / 5 = 3
- (2 + 3 + 4 + 5 + 6) / 5 = 4
- (3 + 4 + 5 + 6 + 7) / 5 = 5
- etc
Moving averages are a simple method to smooth sequential data. Moving averages are typically applied to time-based data,
such as stock prices or server metrics. The smoothing can be used to eliminate high frequency fluctuations or random noise,
which allows lower frequency trends, such as seasonality, to be more easily visualized.
==== Syntax
A `moving_avg` aggregation looks like this in isolation:
[source,js]
--------------------------------------------------
{
"movavg": {
"buckets_path": "the_sum",
"model": "double_exp",
"window": 5,
"gap_policy": "insert_zero",
"settings": {
"alpha": 0.8
}
}
}
--------------------------------------------------
.`moving_avg` Parameters
|===
|Parameter Name |Description |Required |Default
|`buckets_path` |The path to the metric that we wish to calculate a moving average for |Required |
|`model` |The moving average weighting model that we wish to use |Optional |`simple`
|`gap_policy` |Determines what should happen when a gap in the data is encountered. |Optional |`insert_zero`
|`window` |The size of the window to "slide" across the histogram. |Optional |`5`
|`settings` |Model-specific settings, the contents of which differ depending on the model specified. |Optional |
|===
`moving_avg` aggregations must be embedded inside of a `histogram` or `date_histogram` aggregation. They can be
embedded like any other metric aggregation:
[source,js]
--------------------------------------------------
{
"my_date_histo":{ <1>
"date_histogram":{
"field":"timestamp",
"interval":"day",
"min_doc_count": 0 <2>
},
"aggs":{
"the_sum":{
"sum":{ "field": "lemmings" } <3>
},
"the_movavg":{
"moving_avg":{ "buckets_path": "the_sum" } <4>
}
}
}
}
--------------------------------------------------
<1> A `date_histogram` named "my_date_histo" is constructed on the "timestamp" field, with one-day intervals
<2> We must specify "min_doc_count: 0" in our date histogram so that all buckets are returned, even if they are empty.
<3> A `sum` metric is used to calculate the sum of a field. This could be any metric (sum, min, max, etc)
<4> Finally, we specify a `moving_avg` aggregation which uses "the_sum" metric as its input.
Moving averages are built by first specifying a `histogram` or `date_histogram` over a field. You can then optionally
add normal metrics, such as a `sum`, inside of that histogram. Finally, the `moving_avg` is embedded inside the histogram.
The `buckets_path` parameter is then used to "point" at one of the sibling metrics inside of the histogram.
A moving average can also be calculated on the document count of each bucket, instead of a metric:
[source,js]
--------------------------------------------------
{
"my_date_histo":{
"date_histogram":{
"field":"timestamp",
"interval":"day",
"min_doc_count": 0
},
"aggs":{
"the_movavg":{
"moving_avg":{ "buckets_path": "_count" } <1>
}
}
}
}
--------------------------------------------------
<1> By using `_count` instead of a metric name, we can calculate the moving average of document counts in the histogram
==== Models
The `moving_avg` aggregation includes four different moving average "models". The main difference is how the values in the
window are weighted. As data-points become "older" in the window, they may be weighted differently. This will
affect the final average for that window.
Models are specified using the `model` parameter. Some models may have optional configurations which are specified inside
the `settings` parameter.
===== Simple
The `simple` model calculates the sum of all values in the window, then divides by the size of the window. It is effectively
a simple arithmetic mean of the window. The simple model does not perform any time-dependent weighting, which means
the values from a `simple` moving average tend to "lag" behind the real data.
[source,js]
--------------------------------------------------
{
"the_movavg":{
"moving_avg":{
"buckets_path": "the_sum",
"model" : "simple"
}
}
}
--------------------------------------------------
A `simple` model has no special settings to configure.
The window size can change the behavior of the moving average. For example, a small window (`"window": 10`) will closely
track the data and only smooth out small scale fluctuations:
[[movavg_10window]]
.Moving average with window of size 10
image::images/reducers_movavg/movavg_10window.png[]
In contrast, a `simple` moving average with larger window (`"window": 100`) will smooth out all higher-frequency fluctuations,
leaving only low-frequency, long term trends. It also tends to "lag" behind the actual data by a substantial amount:
[[movavg_100window]]
.Moving average with window of size 100
image::images/reducers_movavg/movavg_100window.png[]
===== Linear
The `linear` model assigns a linear weighting to points in the series, such that "older" datapoints (e.g. those at
the beginning of the window) contribute linearly less to the total average. The linear weighting helps reduce
the "lag" behind the data's mean, since older points have less influence.
[source,js]
--------------------------------------------------
{
"the_movavg":{
"moving_avg":{
"buckets_path": "the_sum",
"model" : "linear"
}
}
}
--------------------------------------------------
A `linear` model has no special settings to configure.
Like the `simple` model, window size can change the behavior of the moving average. For example, a small window (`"window": 10`)
will closely track the data and only smooth out small scale fluctuations:
[[linear_10window]]
.Linear moving average with window of size 10
image::images/reducers_movavg/linear_10window.png[]
In contrast, a `linear` moving average with larger window (`"window": 100`) will smooth out all higher-frequency fluctuations,
leaving only low-frequency, long term trends. It also tends to "lag" behind the actual data by a substantial amount,
although typically less than the `simple` model:
[[linear_100window]]
.Linear moving average with window of size 100
image::images/reducers_movavg/linear_100window.png[]
===== Single Exponential
The `single_exp` model is similar to the `linear` model, except older data-points become exponentially less important,
rather than linearly less important. The speed at which the importance decays can be controlled with an `alpha`
setting. Small values make the weight decay slowly, which provides greater smoothing and takes into account a larger
portion of the window. Larger values make the weight decay quickly, which reduces the impact of older values on the
moving average. This tends to make the moving average track the data more closely but with less smoothing.
The default value of `alpha` is `0.5`, and the setting accepts any float from 0-1 inclusive.
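As an illustration, with an `alpha` of `0.5` the weights fall off as roughly 0.5, 0.25, 0.125, ... from newest to oldest, each older point contributing half as much as the point after it.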
[source,js]
--------------------------------------------------
{
"the_movavg":{
"moving_avg":{
"buckets_path": "the_sum",
"model" : "single_exp",
"settings" : {
"alpha" : 0.5
}
}
}
--------------------------------------------------
[[single_0.2alpha]]
.Single Exponential moving average with window of size 10, alpha = 0.2
image::images/reducers_movavg/single_0.2alpha.png[]
[[single_0.7alpha]]
.Single Exponential moving average with window of size 10, alpha = 0.7
image::images/reducers_movavg/single_0.7alpha.png[]
===== Double Exponential
The `double_exp` model, sometimes called "Holt's Linear Trend" model, incorporates a second exponential term which
tracks the data's trend. Single exponential does not perform well when the data has an underlying linear trend. The
double exponential model calculates two values internally: a "level" and a "trend".
The level calculation is similar to `single_exp`, and is an exponentially weighted view of the data. The difference is
that the previously smoothed value is used instead of the raw value, which allows it to stay close to the original series.
The trend calculation looks at the difference between the current and last value (e.g. the slope, or trend, of the
smoothed data). The trend value is also exponentially weighted.
Values are produced by combining the level and trend components.
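For reference, Holt's method is conventionally written as the following pair of recurrences (the textbook formulation; the aggregation's internal implementation may differ in details):

- level(t) = alpha * value(t) + (1 - alpha) * (level(t-1) + trend(t-1))
- trend(t) = beta * (level(t) - level(t-1)) + (1 - beta) * trend(t-1)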
The default value of `alpha` and `beta` is `0.5`, and the settings accept any float from 0-1 inclusive.
[source,js]
--------------------------------------------------
{
"the_movavg":{
"moving_avg":{
"buckets_path": "the_sum",
"model" : "double_exp",
"settings" : {
"alpha" : 0.5,
"beta" : 0.5
}
}
}
--------------------------------------------------
In practice, the `alpha` value behaves very similarly in `double_exp` as in `single_exp`: small values produce more smoothing
and more lag, while larger values produce closer tracking and less lag. The effect of `beta` is often difficult
to see: small values emphasize long-term trends (such as a constant linear trend in the whole series), while larger
values emphasize short-term trends. This will become more apparent when you are predicting values.
[[double_0.2beta]]
.Double Exponential moving average with window of size 100, alpha = 0.5, beta = 0.2
image::images/reducers_movavg/double_0.2beta.png[]
[[double_0.7beta]]
.Double Exponential moving average with window of size 100, alpha = 0.5, beta = 0.7
image::images/reducers_movavg/double_0.7beta.png[]
==== Prediction
All the moving average models support a "prediction" mode, which will attempt to extrapolate into the future given the
current smoothed, moving average. Depending on the model and parameters, these predictions may or may not be accurate.
Predictions are enabled by adding a `predict` parameter to any moving average aggregation, specifying the number of
predictions you would like appended to the end of the series. These predictions will be spaced out at the same interval
as your buckets:
[source,js]
--------------------------------------------------
{
"the_movavg":{
"moving_avg":{
"buckets_path": "the_sum",
"model" : "simple",
"predict" 10
}
}
--------------------------------------------------
The `simple`, `linear` and `single_exp` models all produce "flat" predictions: they essentially converge on the mean
of the last values in the series, producing a flat line:
[[simple_prediction]]
.Simple moving average with window of size 10, predict = 50
image::images/reducers_movavg/simple_prediction.png[]
In contrast, the `double_exp` model can extrapolate based on local or global constant trends. If we set a high `beta`
value, we can extrapolate based on local constant trends (in this case the predictions head down, because the data at the end
of the series was heading in a downward direction):
[[double_prediction_local]]
.Double Exponential moving average with window of size 100, predict = 20, alpha = 0.5, beta = 0.8
image::images/reducers_movavg/double_prediction_local.png[]
In contrast, if we choose a small `beta`, the predictions are based on the global constant trend. In this series, the
global trend is slightly positive, so the prediction makes a sharp u-turn and begins a positive slope:
[[double_prediction_global]]
.Double Exponential moving average with window of size 100, predict = 20, alpha = 0.5, beta = 0.1
image::images/reducers_movavg/double_prediction_global.png[]

View File

@ -39,6 +39,7 @@ import org.elasticsearch.common.lucene.search.Queries;
import org.elasticsearch.common.unit.Fuzziness;
import org.elasticsearch.index.mapper.FieldMapper;
import org.elasticsearch.index.mapper.MapperService;
import org.elasticsearch.index.mapper.core.DateFieldMapper;
import org.elasticsearch.index.query.QueryParseContext;
import org.elasticsearch.index.query.support.QueryParsers;
@ -131,9 +132,6 @@ public class MapperQueryParser extends QueryParser {
setFuzzyMinSim(settings.fuzzyMinSim());
setFuzzyPrefixLength(settings.fuzzyPrefixLength());
setLocale(settings.locale());
if (settings.timeZone() != null) {
setTimeZone(settings.timeZone().toTimeZone());
}
this.analyzeWildcard = settings.analyzeWildcard();
}
@ -377,7 +375,14 @@ public class MapperQueryParser extends QueryParser {
}
try {
return currentMapper.rangeQuery(part1, part2, startInclusive, endInclusive, parseContext);
Query rangeQuery;
if (currentMapper instanceof DateFieldMapper && settings.timeZone() != null) {
DateFieldMapper dateFieldMapper = (DateFieldMapper) this.currentMapper;
rangeQuery = dateFieldMapper.rangeQuery(part1, part2, startInclusive, endInclusive, settings.timeZone(), null, parseContext);
} else {
rangeQuery = currentMapper.rangeQuery(part1, part2, startInclusive, endInclusive, parseContext);
}
return rangeQuery;
} catch (RuntimeException e) {
if (settings.lenient()) {
return null;

View File

@ -68,7 +68,7 @@ public class ClusterRerouteResponse extends AcknowledgedResponse {
@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
ClusterState.Builder.writeTo(state, out);
state.writeTo(out);
writeAcknowledged(out);
RoutingExplanations.writeTo(explanations, out);
}

View File

@ -62,6 +62,6 @@ public class ClusterStateResponse extends ActionResponse {
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
clusterName.writeTo(out);
ClusterState.Builder.writeTo(clusterState, out);
clusterState.writeTo(out);
}
}

View File

@ -19,7 +19,6 @@
package org.elasticsearch.action.admin.cluster.state;
import com.carrotsearch.hppc.cursors.ObjectCursor;
import com.carrotsearch.hppc.cursors.ObjectObjectCursor;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.ActionListener;
@ -29,7 +28,6 @@ import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.cluster.ClusterService;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.cluster.block.ClusterBlockLevel;
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.cluster.metadata.MetaData.Custom;
@ -39,11 +37,6 @@ import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
import java.util.List;
import static com.google.common.collect.Lists.newArrayList;
import static org.elasticsearch.cluster.metadata.MetaData.lookupFactorySafe;
/**
*
*/
@ -84,6 +77,7 @@ public class TransportClusterStateAction extends TransportMasterNodeReadOperatio
logger.trace("Serving cluster state request using version {}", currentState.version());
ClusterState.Builder builder = ClusterState.builder(currentState.getClusterName());
builder.version(currentState.version());
builder.uuid(currentState.uuid());
if (request.nodes()) {
builder.nodes(currentState.nodes());
}
@ -122,10 +116,9 @@ public class TransportClusterStateAction extends TransportMasterNodeReadOperatio
}
// Filter our metadata that shouldn't be returned by API
for(ObjectCursor<String> type : currentState.metaData().customs().keys()) {
Custom.Factory factory = lookupFactorySafe(type.value);
if(!factory.context().contains(MetaData.XContentContext.API)) {
mdBuilder.removeCustom(type.value);
for(ObjectObjectCursor<String, Custom> custom : currentState.metaData().customs()) {
if(!custom.value.context().contains(MetaData.XContentContext.API)) {
mdBuilder.removeCustom(custom.key);
}
}

View File

@ -74,7 +74,7 @@ public class GetAliasesResponse extends ActionResponse {
out.writeString(entry.key);
out.writeVInt(entry.value.size());
for (AliasMetaData aliasMetaData : entry.value) {
AliasMetaData.Builder.writeTo(aliasMetaData, out);
aliasMetaData.writeTo(out);
}
}
}

View File

@ -396,11 +396,11 @@ public class CreateIndexRequest extends AcknowledgedRequest<CreateIndexRequest>
aliases((Map<String, Object>) entry.getValue());
} else {
// maybe custom?
IndexMetaData.Custom.Factory factory = IndexMetaData.lookupFactory(name);
if (factory != null) {
IndexMetaData.Custom proto = IndexMetaData.lookupPrototype(name);
if (proto != null) {
found = true;
try {
customs.put(name, factory.fromMap((Map<String, Object>) entry.getValue()));
customs.put(name, proto.fromMap((Map<String, Object>) entry.getValue()));
} catch (IOException e) {
throw new ElasticsearchParseException("failed to parse custom metadata for [" + name + "]");
}
@ -448,7 +448,7 @@ public class CreateIndexRequest extends AcknowledgedRequest<CreateIndexRequest>
int customSize = in.readVInt();
for (int i = 0; i < customSize; i++) {
String type = in.readString();
IndexMetaData.Custom customIndexMetaData = IndexMetaData.lookupFactorySafe(type).readFrom(in);
IndexMetaData.Custom customIndexMetaData = IndexMetaData.lookupPrototypeSafe(type).readFrom(in);
customs.put(type, customIndexMetaData);
}
int aliasesSize = in.readVInt();
@ -472,7 +472,7 @@ public class CreateIndexRequest extends AcknowledgedRequest<CreateIndexRequest>
out.writeVInt(customs.size());
for (Map.Entry<String, IndexMetaData.Custom> entry : customs.entrySet()) {
out.writeString(entry.getKey());
IndexMetaData.lookupFactorySafe(entry.getKey()).writeTo(entry.getValue(), out);
entry.getValue().writeTo(out);
}
out.writeVInt(aliases.size());
for (Alias alias : aliases) {

View File

@ -134,7 +134,7 @@ public class GetIndexResponse extends ActionResponse {
int valueSize = in.readVInt();
ImmutableOpenMap.Builder<String, MappingMetaData> mappingEntryBuilder = ImmutableOpenMap.builder();
for (int j = 0; j < valueSize; j++) {
mappingEntryBuilder.put(in.readString(), MappingMetaData.readFrom(in));
mappingEntryBuilder.put(in.readString(), MappingMetaData.PROTO.readFrom(in));
}
mappingsMapBuilder.put(key, mappingEntryBuilder.build());
}
@ -181,7 +181,7 @@ public class GetIndexResponse extends ActionResponse {
out.writeVInt(indexEntry.value.size());
for (ObjectObjectCursor<String, MappingMetaData> mappingEntry : indexEntry.value) {
out.writeString(mappingEntry.key);
MappingMetaData.writeTo(mappingEntry.value, out);
mappingEntry.value.writeTo(out);
}
}
out.writeVInt(aliases.size());
@ -189,7 +189,7 @@ public class GetIndexResponse extends ActionResponse {
out.writeString(indexEntry.key);
out.writeVInt(indexEntry.value.size());
for (AliasMetaData aliasEntry : indexEntry.value) {
AliasMetaData.Builder.writeTo(aliasEntry, out);
aliasEntry.writeTo(out);
}
}
out.writeVInt(settings.size());

View File

@ -59,7 +59,7 @@ public class GetMappingsResponse extends ActionResponse {
int valueSize = in.readVInt();
ImmutableOpenMap.Builder<String, MappingMetaData> typeMapBuilder = ImmutableOpenMap.builder();
for (int j = 0; j < valueSize; j++) {
typeMapBuilder.put(in.readString(), MappingMetaData.readFrom(in));
typeMapBuilder.put(in.readString(), MappingMetaData.PROTO.readFrom(in));
}
indexMapBuilder.put(key, typeMapBuilder.build());
}
@ -75,7 +75,7 @@ public class GetMappingsResponse extends ActionResponse {
out.writeVInt(indexEntry.value.size());
for (ObjectObjectCursor<String, MappingMetaData> typeEntry : indexEntry.value) {
out.writeString(typeEntry.key);
MappingMetaData.writeTo(typeEntry.value, out);
typeEntry.value.writeTo(out);
}
}
}

View File

@ -60,7 +60,7 @@ public class GetIndexTemplatesResponse extends ActionResponse {
super.writeTo(out);
out.writeVInt(indexTemplates.size());
for (IndexTemplateMetaData indexTemplate : indexTemplates) {
IndexTemplateMetaData.Builder.writeTo(indexTemplate, out);
indexTemplate.writeTo(out);
}
}
}

View File

@ -292,10 +292,10 @@ public class PutIndexTemplateRequest extends MasterNodeOperationRequest<PutIndex
aliases((Map<String, Object>) entry.getValue());
} else {
// maybe custom?
IndexMetaData.Custom.Factory factory = IndexMetaData.lookupFactory(name);
if (factory != null) {
IndexMetaData.Custom proto = IndexMetaData.lookupPrototype(name);
if (proto != null) {
try {
customs.put(name, factory.fromMap((Map<String, Object>) entry.getValue()));
customs.put(name, proto.fromMap((Map<String, Object>) entry.getValue()));
} catch (IOException e) {
throw new ElasticsearchParseException("failed to parse custom metadata for [" + name + "]");
}
@ -440,7 +440,7 @@ public class PutIndexTemplateRequest extends MasterNodeOperationRequest<PutIndex
int customSize = in.readVInt();
for (int i = 0; i < customSize; i++) {
String type = in.readString();
IndexMetaData.Custom customIndexMetaData = IndexMetaData.lookupFactorySafe(type).readFrom(in);
IndexMetaData.Custom customIndexMetaData = IndexMetaData.lookupPrototypeSafe(type).readFrom(in);
customs.put(type, customIndexMetaData);
}
int aliasesSize = in.readVInt();
@ -466,7 +466,7 @@ public class PutIndexTemplateRequest extends MasterNodeOperationRequest<PutIndex
out.writeVInt(customs.size());
for (Map.Entry<String, IndexMetaData.Custom> entry : customs.entrySet()) {
out.writeString(entry.getKey());
IndexMetaData.lookupFactorySafe(entry.getKey()).writeTo(entry.getValue(), out);
entry.getValue().writeTo(out);
}
out.writeVInt(aliases.size());
for (Alias alias : aliases) {

View File

@ -28,7 +28,9 @@ import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.index.query.FilterBuilder;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.search.aggregations.AbstractAggregationBuilder;
import org.elasticsearch.search.aggregations.AggregationBuilder;
import org.elasticsearch.search.aggregations.reducers.ReducerBuilder;
import org.elasticsearch.search.highlight.HighlightBuilder;
import org.elasticsearch.search.sort.SortBuilder;
@ -162,9 +164,9 @@ public class PercolateRequestBuilder extends BroadcastOperationRequestBuilder<Pe
}
/**
* Delegates to {@link PercolateSourceBuilder#addAggregation(AggregationBuilder)}
* Delegates to {@link PercolateSourceBuilder#addAggregation(AbstractAggregationBuilder)}
*/
public PercolateRequestBuilder addAggregation(AggregationBuilder aggregationBuilder) {
public PercolateRequestBuilder addAggregation(AbstractAggregationBuilder aggregationBuilder) {
sourceBuilder().addAggregation(aggregationBuilder);
return this;
}

View File

@ -19,13 +19,18 @@
package org.elasticsearch.action.percolate;
import com.google.common.collect.ImmutableList;
import org.apache.lucene.util.BytesRef;
import org.elasticsearch.action.support.broadcast.BroadcastShardOperationResponse;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.percolator.PercolateContext;
import org.elasticsearch.search.aggregations.InternalAggregations;
import org.elasticsearch.search.aggregations.reducers.Reducer;
import org.elasticsearch.search.aggregations.reducers.ReducerStreams;
import org.elasticsearch.search.aggregations.reducers.SiblingReducer;
import org.elasticsearch.search.highlight.HighlightField;
import org.elasticsearch.search.query.QuerySearchResult;
@ -51,6 +56,7 @@ public class PercolateShardResponse extends BroadcastShardOperationResponse {
private int requestedSize;
private InternalAggregations aggregations;
private List<SiblingReducer> reducers;
PercolateShardResponse() {
hls = new ArrayList<>();
@ -69,6 +75,7 @@ public class PercolateShardResponse extends BroadcastShardOperationResponse {
if (result.aggregations() != null) {
this.aggregations = (InternalAggregations) result.aggregations();
}
this.reducers = result.reducers();
}
}
@ -112,6 +119,10 @@ public class PercolateShardResponse extends BroadcastShardOperationResponse {
return aggregations;
}
public List<SiblingReducer> reducers() {
return reducers;
}
public byte percolatorTypeId() {
return percolatorTypeId;
}
@ -144,6 +155,16 @@ public class PercolateShardResponse extends BroadcastShardOperationResponse {
hls.add(fields);
}
aggregations = InternalAggregations.readOptionalAggregations(in);
if (in.readBoolean()) {
int reducersSize = in.readVInt();
List<SiblingReducer> reducers = new ArrayList<>(reducersSize);
for (int i = 0; i < reducersSize; i++) {
BytesReference type = in.readBytesReference();
Reducer reducer = ReducerStreams.stream(type).readResult(in);
reducers.add((SiblingReducer) reducer);
}
this.reducers = reducers;
}
}
@Override
@ -169,5 +190,15 @@ public class PercolateShardResponse extends BroadcastShardOperationResponse {
}
}
out.writeOptionalStreamable(aggregations);
if (reducers == null) {
out.writeBoolean(false);
} else {
out.writeBoolean(true);
out.writeVInt(reducers.size());
for (Reducer reducer : reducers) {
out.writeBytesReference(reducer.type().stream());
reducer.writeTo(out);
}
}
}
}

View File

@ -29,6 +29,7 @@ import org.elasticsearch.index.query.FilterBuilder;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.search.aggregations.AbstractAggregationBuilder;
import org.elasticsearch.search.aggregations.AggregationBuilder;
import org.elasticsearch.search.aggregations.reducers.ReducerBuilder;
import org.elasticsearch.search.highlight.HighlightBuilder;
import org.elasticsearch.search.sort.ScoreSortBuilder;
import org.elasticsearch.search.sort.SortBuilder;
@ -50,7 +51,7 @@ public class PercolateSourceBuilder implements ToXContent {
private List<SortBuilder> sorts;
private Boolean trackScores;
private HighlightBuilder highlightBuilder;
private List<AggregationBuilder> aggregations;
private List<AbstractAggregationBuilder> aggregations;
/**
* Sets the document to run the percolate queries against.
@ -130,7 +131,7 @@ public class PercolateSourceBuilder implements ToXContent {
/**
* Add an aggregation definition.
*/
public PercolateSourceBuilder addAggregation(AggregationBuilder aggregationBuilder) {
public PercolateSourceBuilder addAggregation(AbstractAggregationBuilder aggregationBuilder) {
if (aggregations == null) {
aggregations = Lists.newArrayList();
}

View File

@ -34,6 +34,7 @@ import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.script.ScriptService;
import org.elasticsearch.search.Scroll;
import org.elasticsearch.search.aggregations.AbstractAggregationBuilder;
import org.elasticsearch.search.aggregations.reducers.ReducerBuilder;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.fetch.innerhits.InnerHitsBuilder;
import org.elasticsearch.search.highlight.HighlightBuilder;

View File

@ -0,0 +1,108 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.cluster;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.io.stream.StreamableReader;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import java.io.IOException;
/**
* Abstract diffable object with simple diffs implementation that sends the entire object if the object changed or
* nothing if the object remained the same.
*/
public abstract class AbstractDiffable<T extends Diffable<T>> implements Diffable<T> {
@Override
public Diff<T> diff(T previousState) {
if (this.get().equals(previousState)) {
return new CompleteDiff<>();
} else {
return new CompleteDiff<>(get());
}
}
@Override
public Diff<T> readDiffFrom(StreamInput in) throws IOException {
return new CompleteDiff<>(this, in);
}
public static <T extends Diffable<T>> Diff<T> readDiffFrom(StreamableReader<T> reader, StreamInput in) throws IOException {
return new CompleteDiff<T>(reader, in);
}
private static class CompleteDiff<T extends Diffable<T>> implements Diff<T> {
@Nullable
private final T part;
/**
* Creates simple diff with changes
*/
public CompleteDiff(T part) {
this.part = part;
}
/**
* Creates simple diff without changes
*/
public CompleteDiff() {
this.part = null;
}
/**
* Read simple diff from the stream
*/
public CompleteDiff(StreamableReader<T> reader, StreamInput in) throws IOException {
if (in.readBoolean()) {
this.part = reader.readFrom(in);
} else {
this.part = null;
}
}
@Override
public void writeTo(StreamOutput out) throws IOException {
if (part != null) {
out.writeBoolean(true);
part.writeTo(out);
} else {
out.writeBoolean(false);
}
}
@Override
public T apply(T part) {
if (this.part != null) {
return this.part;
} else {
return part;
}
}
}
@SuppressWarnings("unchecked")
public T get() {
return (T) this;
}
}

View File

@ -22,6 +22,7 @@ package org.elasticsearch.cluster;
import com.carrotsearch.hppc.cursors.ObjectCursor;
import com.carrotsearch.hppc.cursors.ObjectObjectCursor;
import com.google.common.collect.ImmutableSet;
import org.elasticsearch.cluster.DiffableUtils.KeyedReader;
import org.elasticsearch.cluster.block.ClusterBlock;
import org.elasticsearch.cluster.block.ClusterBlocks;
import org.elasticsearch.cluster.metadata.IndexMetaData;
@ -55,7 +56,9 @@ import java.util.Map;
/**
*
*/
public class ClusterState implements ToXContent {
public class ClusterState implements ToXContent, Diffable<ClusterState> {
public static final ClusterState PROTO = builder(ClusterName.DEFAULT).build();
public static enum ClusterStateStatus {
UNKNOWN((byte) 0),
@ -74,47 +77,43 @@ public class ClusterState implements ToXContent {
}
}
public interface Custom {
public interface Custom extends Diffable<Custom>, ToXContent {
interface Factory<T extends Custom> {
String type();
T readFrom(StreamInput in) throws IOException;
void writeTo(T customState, StreamOutput out) throws IOException;
void toXContent(T customState, XContentBuilder builder, ToXContent.Params params);
}
String type();
}
private final static Map<String, Custom.Factory> customFactories = new HashMap<>();
private final static Map<String, Custom> customPrototypes = new HashMap<>();
/**
* Register a custom index meta data factory. Make sure to call it from a static block.
*/
public static void registerFactory(String type, Custom.Factory factory) {
customFactories.put(type, factory);
public static void registerPrototype(String type, Custom proto) {
customPrototypes.put(type, proto);
}
@Nullable
public static <T extends Custom> Custom.Factory<T> lookupFactory(String type) {
return customFactories.get(type);
public static <T extends Custom> T lookupPrototype(String type) {
//noinspection unchecked
return (T) customPrototypes.get(type);
}
public static <T extends Custom> Custom.Factory<T> lookupFactorySafe(String type) {
Custom.Factory<T> factory = customFactories.get(type);
if (factory == null) {
throw new IllegalArgumentException("No custom state factory registered for type [" + type + "]");
public static <T extends Custom> T lookupPrototypeSafe(String type) {
@SuppressWarnings("unchecked")
T proto = (T)customPrototypes.get(type);
if (proto == null) {
throw new IllegalArgumentException("No custom state prototype registered for type [" + type + "]");
}
return factory;
return proto;
}
public static final String UNKNOWN_UUID = "_na_";
public static final long UNKNOWN_VERSION = -1;
private final long version;
private final String uuid;
private final RoutingTable routingTable;
private final DiscoveryNodes nodes;
@ -127,17 +126,20 @@ public class ClusterState implements ToXContent {
private final ClusterName clusterName;
private final boolean wasReadFromDiff;
// built on demand
private volatile RoutingNodes routingNodes;
private volatile ClusterStateStatus status;
public ClusterState(long version, ClusterState state) {
this(state.clusterName, version, state.metaData(), state.routingTable(), state.nodes(), state.blocks(), state.customs());
public ClusterState(long version, String uuid, ClusterState state) {
this(state.clusterName, version, uuid, state.metaData(), state.routingTable(), state.nodes(), state.blocks(), state.customs(), false);
}
public ClusterState(ClusterName clusterName, long version, MetaData metaData, RoutingTable routingTable, DiscoveryNodes nodes, ClusterBlocks blocks, ImmutableOpenMap<String, Custom> customs) {
public ClusterState(ClusterName clusterName, long version, String uuid, MetaData metaData, RoutingTable routingTable, DiscoveryNodes nodes, ClusterBlocks blocks, ImmutableOpenMap<String, Custom> customs, boolean wasReadFromDiff) {
this.version = version;
this.uuid = uuid;
this.clusterName = clusterName;
this.metaData = metaData;
this.routingTable = routingTable;
@ -145,6 +147,7 @@ public class ClusterState implements ToXContent {
this.blocks = blocks;
this.customs = customs;
this.status = ClusterStateStatus.UNKNOWN;
this.wasReadFromDiff = wasReadFromDiff;
}
public ClusterStateStatus status() {
@ -164,6 +167,14 @@ public class ClusterState implements ToXContent {
return version();
}
/**
* This uuid is automatically generated for each version of cluster state. It is used to make sure that
* we are applying diffs to the right previous state.
*/
public String uuid() {
return this.uuid;
}
public DiscoveryNodes nodes() {
return this.nodes;
}
@ -216,6 +227,11 @@ public class ClusterState implements ToXContent {
return this.clusterName;
}
// Used for testing and logging to determine how this cluster state was sent over the wire
boolean wasReadFromDiff() {
return wasReadFromDiff;
}
/**
* Returns a built (on demand) routing nodes view of the routing table. <b>NOTE, the routing nodes
* are mutable, use them just for read operations</b>
@ -231,6 +247,8 @@ public class ClusterState implements ToXContent {
public String prettyPrint() {
StringBuilder sb = new StringBuilder();
sb.append("version: ").append(version).append("\n");
sb.append("uuid: ").append(uuid).append("\n");
sb.append("from_diff: ").append(wasReadFromDiff).append("\n");
sb.append("meta data version: ").append(metaData.version()).append("\n");
sb.append(nodes().prettyPrint());
sb.append(routingTable().prettyPrint());
@ -302,14 +320,13 @@ public class ClusterState implements ToXContent {
}
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
EnumSet<Metric> metrics = Metric.parseString(params.param("metric", "_all"), true);
if (metrics.contains(Metric.VERSION)) {
builder.field("version", version);
builder.field("uuid", uuid);
}
if (metrics.contains(Metric.MASTER_NODE)) {
@ -434,7 +451,7 @@ public class ClusterState implements ToXContent {
for (ObjectObjectCursor<String, MetaData.Custom> cursor : metaData.customs()) {
builder.startObject(cursor.key);
MetaData.lookupFactorySafe(cursor.key).toXContent(cursor.value, builder, params);
cursor.value.toXContent(builder, params);
builder.endObject();
}
@ -473,7 +490,7 @@ public class ClusterState implements ToXContent {
builder.startObject("nodes");
for (RoutingNode routingNode : readOnlyRoutingNodes()) {
builder.startArray(routingNode.nodeId(), XContentBuilder.FieldCaseConversion.NONE);
builder.startArray(routingNode.nodeId() == null ? "null" : routingNode.nodeId(), XContentBuilder.FieldCaseConversion.NONE);
for (ShardRouting shardRouting : routingNode) {
shardRouting.toXContent(builder, params);
}
@ -486,7 +503,7 @@ public class ClusterState implements ToXContent {
if (metrics.contains(Metric.CUSTOMS)) {
for (ObjectObjectCursor<String, Custom> cursor : customs) {
builder.startObject(cursor.key);
lookupFactorySafe(cursor.key).toXContent(cursor.value, builder, params);
cursor.value.toXContent(builder, params);
builder.endObject();
}
}
@ -506,21 +523,25 @@ public class ClusterState implements ToXContent {
private final ClusterName clusterName;
private long version = 0;
private String uuid = UNKNOWN_UUID;
private MetaData metaData = MetaData.EMPTY_META_DATA;
private RoutingTable routingTable = RoutingTable.EMPTY_ROUTING_TABLE;
private DiscoveryNodes nodes = DiscoveryNodes.EMPTY_NODES;
private ClusterBlocks blocks = ClusterBlocks.EMPTY_CLUSTER_BLOCK;
private final ImmutableOpenMap.Builder<String, Custom> customs;
private boolean fromDiff;
public Builder(ClusterState state) {
this.clusterName = state.clusterName;
this.version = state.version();
this.uuid = state.uuid();
this.nodes = state.nodes();
this.routingTable = state.routingTable();
this.metaData = state.metaData();
this.blocks = state.blocks();
this.customs = ImmutableOpenMap.builder(state.customs());
this.fromDiff = false;
}
public Builder(ClusterName clusterName) {
@ -574,6 +595,17 @@ public class ClusterState implements ToXContent {
return this;
}
public Builder incrementVersion() {
this.version = version + 1;
this.uuid = UNKNOWN_UUID;
return this;
}
public Builder uuid(String uuid) {
this.uuid = uuid;
return this;
}
public Custom getCustom(String type) {
return customs.get(type);
}
@ -588,13 +620,26 @@ public class ClusterState implements ToXContent {
return this;
}
public Builder customs(ImmutableOpenMap<String, Custom> customs) {
this.customs.putAll(customs);
return this;
}
public Builder fromDiff(boolean fromDiff) {
this.fromDiff = fromDiff;
return this;
}
public ClusterState build() {
return new ClusterState(clusterName, version, metaData, routingTable, nodes, blocks, customs.build());
if (UNKNOWN_UUID.equals(uuid)) {
uuid = Strings.randomBase64UUID();
}
return new ClusterState(clusterName, version, uuid, metaData, routingTable, nodes, blocks, customs.build(), fromDiff);
}
public static byte[] toBytes(ClusterState state) throws IOException {
BytesStreamOutput os = new BytesStreamOutput();
writeTo(state, os);
state.writeTo(os);
return os.bytes().toBytes();
}
@ -606,39 +651,152 @@ public class ClusterState implements ToXContent {
return readFrom(new BytesStreamInput(data), localNode);
}
public static void writeTo(ClusterState state, StreamOutput out) throws IOException {
state.clusterName.writeTo(out);
out.writeLong(state.version());
MetaData.Builder.writeTo(state.metaData(), out);
RoutingTable.Builder.writeTo(state.routingTable(), out);
DiscoveryNodes.Builder.writeTo(state.nodes(), out);
ClusterBlocks.Builder.writeClusterBlocks(state.blocks(), out);
out.writeVInt(state.customs().size());
for (ObjectObjectCursor<String, Custom> cursor : state.customs()) {
out.writeString(cursor.key);
lookupFactorySafe(cursor.key).writeTo(cursor.value, out);
}
}
/**
* @param in input stream
* @param localNode used to set the local node in the cluster state. can be null.
*/
public static ClusterState readFrom(StreamInput in, @Nullable DiscoveryNode localNode) throws IOException {
ClusterName clusterName = ClusterName.readClusterName(in);
return PROTO.readFrom(in, localNode);
}
}
@Override
public Diff diff(ClusterState previousState) {
return new ClusterStateDiff(previousState, this);
}
@Override
public Diff<ClusterState> readDiffFrom(StreamInput in) throws IOException {
return new ClusterStateDiff(in, this);
}
public ClusterState readFrom(StreamInput in, DiscoveryNode localNode) throws IOException {
ClusterName clusterName = ClusterName.readClusterName(in);
Builder builder = new Builder(clusterName);
builder.version = in.readLong();
builder.uuid = in.readString();
builder.metaData = MetaData.Builder.readFrom(in);
builder.routingTable = RoutingTable.Builder.readFrom(in);
builder.nodes = DiscoveryNodes.Builder.readFrom(in, localNode);
builder.blocks = ClusterBlocks.Builder.readClusterBlocks(in);
int customSize = in.readVInt();
for (int i = 0; i < customSize; i++) {
String type = in.readString();
Custom customIndexMetaData = lookupPrototypeSafe(type).readFrom(in);
builder.putCustom(type, customIndexMetaData);
}
return builder.build();
}
@Override
public ClusterState readFrom(StreamInput in) throws IOException {
return readFrom(in, nodes.localNode());
}
@Override
public void writeTo(StreamOutput out) throws IOException {
clusterName.writeTo(out);
out.writeLong(version);
out.writeString(uuid);
metaData.writeTo(out);
routingTable.writeTo(out);
nodes.writeTo(out);
blocks.writeTo(out);
out.writeVInt(customs.size());
for (ObjectObjectCursor<String, Custom> cursor : customs) {
out.writeString(cursor.key);
cursor.value.writeTo(out);
}
}
private static class ClusterStateDiff implements Diff<ClusterState> {
private final long toVersion;
private final String fromUuid;
private final String toUuid;
private final ClusterName clusterName;
private final Diff<RoutingTable> routingTable;
private final Diff<DiscoveryNodes> nodes;
private final Diff<MetaData> metaData;
private final Diff<ClusterBlocks> blocks;
private final Diff<ImmutableOpenMap<String, Custom>> customs;
public ClusterStateDiff(ClusterState before, ClusterState after) {
fromUuid = before.uuid;
toUuid = after.uuid;
toVersion = after.version;
clusterName = after.clusterName;
routingTable = after.routingTable.diff(before.routingTable);
nodes = after.nodes.diff(before.nodes);
metaData = after.metaData.diff(before.metaData);
blocks = after.blocks.diff(before.blocks);
customs = DiffableUtils.diff(before.customs, after.customs);
}
public ClusterStateDiff(StreamInput in, ClusterState proto) throws IOException {
clusterName = ClusterName.readClusterName(in);
fromUuid = in.readString();
toUuid = in.readString();
toVersion = in.readLong();
routingTable = proto.routingTable.readDiffFrom(in);
nodes = proto.nodes.readDiffFrom(in);
metaData = proto.metaData.readDiffFrom(in);
blocks = proto.blocks.readDiffFrom(in);
customs = DiffableUtils.readImmutableOpenMapDiff(in, new KeyedReader<Custom>() {
@Override
public Custom readFrom(StreamInput in, String key) throws IOException {
return lookupPrototypeSafe(key).readFrom(in);
}
@Override
public Diff<Custom> readDiffFrom(StreamInput in, String key) throws IOException {
return lookupPrototypeSafe(key).readDiffFrom(in);
}
});
}
@Override
public void writeTo(StreamOutput out) throws IOException {
clusterName.writeTo(out);
out.writeString(fromUuid);
out.writeString(toUuid);
out.writeLong(toVersion);
routingTable.writeTo(out);
nodes.writeTo(out);
metaData.writeTo(out);
blocks.writeTo(out);
customs.writeTo(out);
}
@Override
public ClusterState apply(ClusterState state) {
Builder builder = new Builder(clusterName);
builder.version = in.readLong();
builder.metaData = MetaData.Builder.readFrom(in);
builder.routingTable = RoutingTable.Builder.readFrom(in);
builder.nodes = DiscoveryNodes.Builder.readFrom(in, localNode);
builder.blocks = ClusterBlocks.Builder.readClusterBlocks(in);
int customSize = in.readVInt();
for (int i = 0; i < customSize; i++) {
String type = in.readString();
Custom customIndexMetaData = lookupFactorySafe(type).readFrom(in);
builder.putCustom(type, customIndexMetaData);
if (toUuid.equals(state.uuid)) {
// no need to read the rest - cluster state didn't change
return state;
}
if (fromUuid.equals(state.uuid) == false) {
throw new IncompatibleClusterStateVersionException(state.version, state.uuid, toVersion, fromUuid);
}
builder.uuid(toUuid);
builder.version(toVersion);
builder.routingTable(routingTable.apply(state.routingTable));
builder.nodes(nodes.apply(state.nodes));
builder.metaData(metaData.apply(state.metaData));
builder.blocks(blocks.apply(state.blocks));
builder.customs(customs.apply(state.customs));
builder.fromDiff(true);
return builder.build();
}
}
}

View File

@ -0,0 +1,42 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.cluster;
import org.elasticsearch.common.io.stream.StreamOutput;
import java.io.IOException;
/**
* Represents difference between states of cluster state parts
*/
public interface Diff<T> {
/**
* Applies difference to the specified part and returns the resulting part
*/
T apply(T part);
/**
* Writes the differences into the output stream
* @param out
* @throws IOException
*/
void writeTo(StreamOutput out) throws IOException;
}

View File

@ -0,0 +1,42 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.cluster;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.io.stream.StreamInput;
import java.io.IOException;
/**
* Cluster state part, changes in which can be serialized
*/
public interface Diffable<T> extends Writeable<T> {
/**
* Returns serializable object representing differences between this and previousState
*/
Diff<T> diff(T previousState);
/**
* Reads the {@link org.elasticsearch.cluster.Diff} from StreamInput
*/
Diff<T> readDiffFrom(StreamInput in) throws IOException;
}
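
To make the two methods concrete, a hedged, self-contained sketch (not part of the commit; VersionToken is hypothetical) of the smallest valid Diffable implementation, which ships the complete new value as its "diff" - the same strategy the AbstractDiffable base class referenced by other files in this commit packages for parts that have no cheaper incremental form:

[source,java]
--------------------------------------------------
package org.elasticsearch.cluster;

import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;

import java.io.IOException;

/** Hypothetical example part: one immutable string value. */
public class VersionToken implements Diffable<VersionToken> {

    private final String token;

    public VersionToken(String token) {
        this.token = token;
    }

    @Override
    public Diff<VersionToken> diff(VersionToken previousState) {
        return new CompleteDiff(this);
    }

    @Override
    public Diff<VersionToken> readDiffFrom(StreamInput in) throws IOException {
        return new CompleteDiff(readFrom(in));
    }

    @Override
    public VersionToken readFrom(StreamInput in) throws IOException {
        return new VersionToken(in.readString());
    }

    @Override
    public void writeTo(StreamOutput out) throws IOException {
        out.writeString(token);
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof VersionToken && token.equals(((VersionToken) o).token);
    }

    @Override
    public int hashCode() {
        return token.hashCode();
    }

    /** A diff that simply carries the complete new state. */
    private static class CompleteDiff implements Diff<VersionToken> {

        private final VersionToken newState;

        CompleteDiff(VersionToken newState) {
            this.newState = newState;
        }

        @Override
        public VersionToken apply(VersionToken part) {
            return newState; // ignore the previous part and replace it wholesale
        }

        @Override
        public void writeTo(StreamOutput out) throws IOException {
            newState.writeTo(out);
        }
    }
}
--------------------------------------------------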

View File

@ -0,0 +1,283 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.cluster;
import com.carrotsearch.hppc.cursors.ObjectCursor;
import com.carrotsearch.hppc.cursors.ObjectObjectCursor;
import com.google.common.collect.ImmutableMap;
import org.elasticsearch.common.collect.ImmutableOpenMap;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import java.io.IOException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import static com.google.common.collect.Lists.newArrayList;
import static com.google.common.collect.Maps.newHashMap;
public final class DiffableUtils {
private DiffableUtils() {
}
/**
* Calculates diff between two ImmutableOpenMaps of Diffable objects
*/
public static <T extends Diffable<T>> Diff<ImmutableOpenMap<String, T>> diff(ImmutableOpenMap<String, T> before, ImmutableOpenMap<String, T> after) {
assert after != null && before != null;
return new ImmutableOpenMapDiff<>(before, after);
}
/**
* Calculates diff between two ImmutableMaps of Diffable objects
*/
public static <T extends Diffable<T>> Diff<ImmutableMap<String, T>> diff(ImmutableMap<String, T> before, ImmutableMap<String, T> after) {
assert after != null && before != null;
return new ImmutableMapDiff<>(before, after);
}
/**
* Loads an object that represents difference between two ImmutableOpenMaps
*/
public static <T extends Diffable<T>> Diff<ImmutableOpenMap<String, T>> readImmutableOpenMapDiff(StreamInput in, KeyedReader<T> keyedReader) throws IOException {
return new ImmutableOpenMapDiff<>(in, keyedReader);
}
/**
* Loads an object that represents difference between two ImmutableMaps
*/
public static <T extends Diffable<T>> Diff<ImmutableMap<String, T>> readImmutableMapDiff(StreamInput in, KeyedReader<T> keyedReader) throws IOException {
return new ImmutableMapDiff<>(in, keyedReader);
}
/**
* Loads an object that represents difference between two ImmutableOpenMaps
*/
public static <T extends Diffable<T>> Diff<ImmutableOpenMap<String, T>> readImmutableOpenMapDiff(StreamInput in, T proto) throws IOException {
return new ImmutableOpenMapDiff<>(in, new PrototypeReader<>(proto));
}
/**
* Loads an object that represents difference between two ImmutableMaps
*/
public static <T extends Diffable<T>> Diff<ImmutableMap<String, T>> readImmutableMapDiff(StreamInput in, T proto) throws IOException {
return new ImmutableMapDiff<>(in, new PrototypeReader<>(proto));
}
/**
* A reader that can deserialize an object. The reader can select the deserialization type based on the key. It's
* used in custom metadata deserialization.
*/
public interface KeyedReader<T> {
/**
* reads an object of the type T from the stream input
*/
T readFrom(StreamInput in, String key) throws IOException;
/**
* reads an object that represents differences between two objects with the type T from the stream input
*/
Diff<T> readDiffFrom(StreamInput in, String key) throws IOException;
}
/**
* Implementation of the KeyedReader that uses a prototype object for reading operations
*
* Note: this implementation is ignoring the key.
*/
public static class PrototypeReader<T extends Diffable<T>> implements KeyedReader<T> {
private T proto;
public PrototypeReader(T proto) {
this.proto = proto;
}
@Override
public T readFrom(StreamInput in, String key) throws IOException {
return proto.readFrom(in);
}
@Override
public Diff<T> readDiffFrom(StreamInput in, String key) throws IOException {
return proto.readDiffFrom(in);
}
}
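
As a usage sketch (the wrapper class is hypothetical; MappingMetaData.PROTO is real and added to MappingMetaData later in this commit), the prototype overloads spare callers a hand-written KeyedReader whenever every value in the map has the same type; heterogeneous maps such as the customs maps keep using the KeyedReader form, exactly as IndexMetaDataDiff and MetaDataDiff do further down:

[source,java]
--------------------------------------------------
package org.elasticsearch.cluster;

import org.elasticsearch.cluster.metadata.MappingMetaData;
import org.elasticsearch.common.collect.ImmutableOpenMap;
import org.elasticsearch.common.io.stream.StreamInput;

import java.io.IOException;

public class ReaderChoiceSketch {

    /** Homogeneous map: a single prototype deserializes every value. */
    static Diff<ImmutableOpenMap<String, MappingMetaData>> readMappingsDiff(StreamInput in) throws IOException {
        return DiffableUtils.readImmutableOpenMapDiff(in, MappingMetaData.PROTO);
    }
}
--------------------------------------------------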
/**
* Represents differences between two ImmutableMaps of diffable objects
*
* @param <T> the diffable object
*/
private static class ImmutableMapDiff<T extends Diffable<T>> extends MapDiff<T, ImmutableMap<String, T>> {
protected ImmutableMapDiff(StreamInput in, KeyedReader<T> reader) throws IOException {
super(in, reader);
}
public ImmutableMapDiff(ImmutableMap<String, T> before, ImmutableMap<String, T> after) {
assert after != null && before != null;
for (String key : before.keySet()) {
if (!after.containsKey(key)) {
deletes.add(key);
}
}
for (Map.Entry<String, T> partIter : after.entrySet()) {
T beforePart = before.get(partIter.getKey());
if (beforePart == null) {
adds.put(partIter.getKey(), partIter.getValue());
} else if (partIter.getValue().equals(beforePart) == false) {
diffs.put(partIter.getKey(), partIter.getValue().diff(beforePart));
}
}
}
@Override
public ImmutableMap<String, T> apply(ImmutableMap<String, T> map) {
HashMap<String, T> builder = newHashMap();
builder.putAll(map);
for (String part : deletes) {
builder.remove(part);
}
for (Map.Entry<String, Diff<T>> diff : diffs.entrySet()) {
builder.put(diff.getKey(), diff.getValue().apply(builder.get(diff.getKey())));
}
for (Map.Entry<String, T> addition : adds.entrySet()) {
builder.put(addition.getKey(), addition.getValue());
}
return ImmutableMap.copyOf(builder);
}
}
/**
* Represents differences between two ImmutableOpenMaps of diffable objects
*
* @param <T> the diffable object
*/
private static class ImmutableOpenMapDiff<T extends Diffable<T>> extends MapDiff<T, ImmutableOpenMap<String, T>> {
protected ImmutableOpenMapDiff(StreamInput in, KeyedReader<T> reader) throws IOException {
super(in, reader);
}
public ImmutableOpenMapDiff(ImmutableOpenMap<String, T> before, ImmutableOpenMap<String, T> after) {
assert after != null && before != null;
for (ObjectCursor<String> key : before.keys()) {
if (!after.containsKey(key.value)) {
deletes.add(key.value);
}
}
for (ObjectObjectCursor<String, T> partIter : after) {
T beforePart = before.get(partIter.key);
if (beforePart == null) {
adds.put(partIter.key, partIter.value);
} else if (partIter.value.equals(beforePart) == false) {
diffs.put(partIter.key, partIter.value.diff(beforePart));
}
}
}
@Override
public ImmutableOpenMap<String, T> apply(ImmutableOpenMap<String, T> map) {
ImmutableOpenMap.Builder<String, T> builder = ImmutableOpenMap.builder();
builder.putAll(map);
for (String part : deletes) {
builder.remove(part);
}
for (Map.Entry<String, Diff<T>> diff : diffs.entrySet()) {
builder.put(diff.getKey(), diff.getValue().apply(builder.get(diff.getKey())));
}
for (Map.Entry<String, T> addition : adds.entrySet()) {
builder.put(addition.getKey(), addition.getValue());
}
return builder.build();
}
}
/**
* Represents differences between two maps of diffable objects
*
* This class is used as a base class for different map implementations
*
* @param <T> the diffable object
*/
private static abstract class MapDiff<T extends Diffable<T>, M> implements Diff<M> {
protected final List<String> deletes;
protected final Map<String, Diff<T>> diffs;
protected final Map<String, T> adds;
protected MapDiff() {
deletes = newArrayList();
diffs = newHashMap();
adds = newHashMap();
}
protected MapDiff(StreamInput in, KeyedReader<T> reader) throws IOException {
deletes = newArrayList();
diffs = newHashMap();
adds = newHashMap();
int deletesCount = in.readVInt();
for (int i = 0; i < deletesCount; i++) {
deletes.add(in.readString());
}
int diffsCount = in.readVInt();
for (int i = 0; i < diffsCount; i++) {
String key = in.readString();
Diff<T> diff = reader.readDiffFrom(in, key);
diffs.put(key, diff);
}
int addsCount = in.readVInt();
for (int i = 0; i < addsCount; i++) {
String key = in.readString();
T part = reader.readFrom(in, key);
adds.put(key, part);
}
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeVInt(deletes.size());
for (String delete : deletes) {
out.writeString(delete);
}
out.writeVInt(diffs.size());
for (Map.Entry<String, Diff<T>> entry : diffs.entrySet()) {
out.writeString(entry.getKey());
entry.getValue().writeTo(out);
}
out.writeVInt(adds.size());
for (Map.Entry<String, T> entry : adds.entrySet()) {
out.writeString(entry.getKey());
entry.getValue().writeTo(out);
}
}
}
}
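
To see the three buckets (deletes, diffs, adds) in action, a hedged end-to-end sketch reusing the hypothetical VersionToken part from the Diffable example above:

[source,java]
--------------------------------------------------
package org.elasticsearch.cluster;

import org.elasticsearch.common.collect.ImmutableOpenMap;

public class MapDiffSketch {

    public static void main(String[] args) {
        ImmutableOpenMap.Builder<String, VersionToken> beforeBuilder = ImmutableOpenMap.builder();
        beforeBuilder.put("a", new VersionToken("1"));
        beforeBuilder.put("b", new VersionToken("1"));
        ImmutableOpenMap<String, VersionToken> before = beforeBuilder.build();

        ImmutableOpenMap.Builder<String, VersionToken> afterBuilder = ImmutableOpenMap.builder();
        afterBuilder.put("a", new VersionToken("2")); // changed -> recorded in diffs
        afterBuilder.put("c", new VersionToken("1")); // added   -> recorded in adds
        ImmutableOpenMap<String, VersionToken> after = afterBuilder.build(); // "b" missing -> recorded in deletes

        Diff<ImmutableOpenMap<String, VersionToken>> diff = DiffableUtils.diff(before, after);
        ImmutableOpenMap<String, VersionToken> rebuilt = diff.apply(before);
        // rebuilt now matches `after`: "a" updated, "b" removed, "c" added
    }
}
--------------------------------------------------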

View File

@ -0,0 +1,35 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.cluster;
import org.elasticsearch.ElasticsearchException;
/**
* Thrown when a cluster state diff cannot be applied because the local cluster state is not the state the diff was computed against (uuid mismatch)
*/
public class IncompatibleClusterStateVersionException extends ElasticsearchException {
public IncompatibleClusterStateVersionException(String msg) {
super(msg);
}
public IncompatibleClusterStateVersionException(long expectedVersion, String expectedUuid, long receivedVersion, String receivedUuid) {
super("Expected diff for version " + expectedVersion + " with uuid " + expectedUuid + " got version " + receivedVersion + " and uuid " + receivedUuid);
}
}
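
A hedged sketch (names hypothetical) of the receiver-side pattern this exception enables: apply the diff when the local state is the one the diff was computed against, otherwise fall back to a full cluster state:

[source,java]
--------------------------------------------------
package org.elasticsearch.cluster;

public class ApplyDiffSketch {

    /**
     * Returns the updated state, or null when the diff does not fit the local
     * state - in which case the caller would process a full cluster state instead.
     */
    static ClusterState applyOrNull(Diff<ClusterState> receivedDiff, ClusterState localState) {
        try {
            return receivedDiff.apply(localState);
        } catch (IncompatibleClusterStateVersionException e) {
            return null; // local state diverged; request the full state
        }
    }
}
--------------------------------------------------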

View File

@ -23,6 +23,7 @@ import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.Maps;
import com.google.common.collect.Sets;
import org.elasticsearch.cluster.AbstractDiffable;
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.cluster.metadata.MetaDataIndexStateService;
import org.elasticsearch.common.io.stream.StreamInput;
@ -36,10 +37,12 @@ import java.util.Set;
/**
* Represents current cluster level blocks to block dirty operations done against the cluster.
*/
public class ClusterBlocks {
public class ClusterBlocks extends AbstractDiffable<ClusterBlocks> {
public static final ClusterBlocks EMPTY_CLUSTER_BLOCK = new ClusterBlocks(ImmutableSet.<ClusterBlock>of(), ImmutableMap.<String, ImmutableSet<ClusterBlock>>of());
public static final ClusterBlocks PROTO = EMPTY_CLUSTER_BLOCK;
private final ImmutableSet<ClusterBlock> global;
private final ImmutableMap<String, ImmutableSet<ClusterBlock>> indicesBlocks;
@ -203,6 +206,43 @@ public class ClusterBlocks {
return new ClusterBlockException(builder.build());
}
@Override
public void writeTo(StreamOutput out) throws IOException {
writeBlockSet(global, out);
out.writeVInt(indicesBlocks.size());
for (Map.Entry<String, ImmutableSet<ClusterBlock>> entry : indicesBlocks.entrySet()) {
out.writeString(entry.getKey());
writeBlockSet(entry.getValue(), out);
}
}
private static void writeBlockSet(ImmutableSet<ClusterBlock> blocks, StreamOutput out) throws IOException {
out.writeVInt(blocks.size());
for (ClusterBlock block : blocks) {
block.writeTo(out);
}
}
@Override
public ClusterBlocks readFrom(StreamInput in) throws IOException {
ImmutableSet<ClusterBlock> global = readBlockSet(in);
ImmutableMap.Builder<String, ImmutableSet<ClusterBlock>> indicesBuilder = ImmutableMap.builder();
int size = in.readVInt();
for (int j = 0; j < size; j++) {
indicesBuilder.put(in.readString().intern(), readBlockSet(in));
}
return new ClusterBlocks(global, indicesBuilder.build());
}
private static ImmutableSet<ClusterBlock> readBlockSet(StreamInput in) throws IOException {
ImmutableSet.Builder<ClusterBlock> builder = ImmutableSet.builder();
int size = in.readVInt();
for (int i = 0; i < size; i++) {
builder.add(ClusterBlock.readClusterBlock(in));
}
return builder.build();
}
static class ImmutableLevelHolder {
static final ImmutableLevelHolder EMPTY = new ImmutableLevelHolder(ImmutableSet.<ClusterBlock>of(), ImmutableMap.<String, ImmutableSet<ClusterBlock>>of());
@ -313,38 +353,7 @@ public class ClusterBlocks {
}
public static ClusterBlocks readClusterBlocks(StreamInput in) throws IOException {
ImmutableSet<ClusterBlock> global = readBlockSet(in);
ImmutableMap.Builder<String, ImmutableSet<ClusterBlock>> indicesBuilder = ImmutableMap.builder();
int size = in.readVInt();
for (int j = 0; j < size; j++) {
indicesBuilder.put(in.readString().intern(), readBlockSet(in));
}
return new ClusterBlocks(global, indicesBuilder.build());
}
public static void writeClusterBlocks(ClusterBlocks blocks, StreamOutput out) throws IOException {
writeBlockSet(blocks.global(), out);
out.writeVInt(blocks.indices().size());
for (Map.Entry<String, ImmutableSet<ClusterBlock>> entry : blocks.indices().entrySet()) {
out.writeString(entry.getKey());
writeBlockSet(entry.getValue(), out);
}
}
private static void writeBlockSet(ImmutableSet<ClusterBlock> blocks, StreamOutput out) throws IOException {
out.writeVInt(blocks.size());
for (ClusterBlock block : blocks) {
block.writeTo(out);
}
}
private static ImmutableSet<ClusterBlock> readBlockSet(StreamInput in) throws IOException {
ImmutableSet.Builder<ClusterBlock> builder = ImmutableSet.builder();
int size = in.readVInt();
for (int i = 0; i < size; i++) {
builder.add(ClusterBlock.readClusterBlock(in));
}
return builder.build();
return PROTO.readFrom(in);
}
}
}

View File

@ -21,6 +21,7 @@ package org.elasticsearch.cluster.metadata;
import com.google.common.collect.ImmutableSet;
import org.elasticsearch.ElasticsearchGenerationException;
import org.elasticsearch.cluster.AbstractDiffable;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.compress.CompressedString;
import org.elasticsearch.common.io.stream.StreamInput;
@ -38,7 +39,9 @@ import java.util.Set;
/**
*
*/
public class AliasMetaData {
public class AliasMetaData extends AbstractDiffable<AliasMetaData> {
public static final AliasMetaData PROTO = new AliasMetaData("", null, null, null);
private final String alias;
@ -146,6 +149,48 @@ public class AliasMetaData {
return result;
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeString(alias());
if (filter() != null) {
out.writeBoolean(true);
filter.writeTo(out);
} else {
out.writeBoolean(false);
}
if (indexRouting() != null) {
out.writeBoolean(true);
out.writeString(indexRouting());
} else {
out.writeBoolean(false);
}
if (searchRouting() != null) {
out.writeBoolean(true);
out.writeString(searchRouting());
} else {
out.writeBoolean(false);
}
}
@Override
public AliasMetaData readFrom(StreamInput in) throws IOException {
String alias = in.readString();
CompressedString filter = null;
if (in.readBoolean()) {
filter = CompressedString.readCompressedString(in);
}
String indexRouting = null;
if (in.readBoolean()) {
indexRouting = in.readString();
}
String searchRouting = null;
if (in.readBoolean()) {
searchRouting = in.readString();
}
return new AliasMetaData(alias, filter, indexRouting, searchRouting);
}
public static class Builder {
private final String alias;
@ -294,44 +339,12 @@ public class AliasMetaData {
return builder.build();
}
public static void writeTo(AliasMetaData aliasMetaData, StreamOutput out) throws IOException {
out.writeString(aliasMetaData.alias());
if (aliasMetaData.filter() != null) {
out.writeBoolean(true);
aliasMetaData.filter.writeTo(out);
} else {
out.writeBoolean(false);
}
if (aliasMetaData.indexRouting() != null) {
out.writeBoolean(true);
out.writeString(aliasMetaData.indexRouting());
} else {
out.writeBoolean(false);
}
if (aliasMetaData.searchRouting() != null) {
out.writeBoolean(true);
out.writeString(aliasMetaData.searchRouting());
} else {
out.writeBoolean(false);
}
public void writeTo(AliasMetaData aliasMetaData, StreamOutput out) throws IOException {
aliasMetaData.writeTo(out);
}
public static AliasMetaData readFrom(StreamInput in) throws IOException {
String alias = in.readString();
CompressedString filter = null;
if (in.readBoolean()) {
filter = CompressedString.readCompressedString(in);
}
String indexRouting = null;
if (in.readBoolean()) {
indexRouting = in.readString();
}
String searchRouting = null;
if (in.readBoolean()) {
searchRouting = in.readString();
}
return new AliasMetaData(alias, filter, indexRouting, searchRouting);
return PROTO.readFrom(in);
}
}

View File

@ -24,6 +24,9 @@ import com.carrotsearch.hppc.cursors.ObjectObjectCursor;
import com.google.common.base.Preconditions;
import com.google.common.collect.ImmutableMap;
import org.elasticsearch.Version;
import org.elasticsearch.cluster.Diff;
import org.elasticsearch.cluster.Diffable;
import org.elasticsearch.cluster.DiffableUtils;
import org.elasticsearch.cluster.block.ClusterBlock;
import org.elasticsearch.cluster.block.ClusterBlockLevel;
import org.elasticsearch.cluster.node.DiscoveryNodeFilters;
@ -59,60 +62,54 @@ import static org.elasticsearch.common.settings.ImmutableSettings.*;
/**
*
*/
public class IndexMetaData {
public class IndexMetaData implements Diffable<IndexMetaData> {
public static final IndexMetaData PROTO = IndexMetaData.builder("")
.settings(ImmutableSettings.builder().put(IndexMetaData.SETTING_VERSION_CREATED, Version.CURRENT))
.numberOfShards(1).numberOfReplicas(0).build();
public interface Custom {
public interface Custom extends Diffable<Custom>, ToXContent {
String type();
interface Factory<T extends Custom> {
Custom fromMap(Map<String, Object> map) throws IOException;
String type();
Custom fromXContent(XContentParser parser) throws IOException;
T readFrom(StreamInput in) throws IOException;
void writeTo(T customIndexMetaData, StreamOutput out) throws IOException;
T fromMap(Map<String, Object> map) throws IOException;
T fromXContent(XContentParser parser) throws IOException;
void toXContent(T customIndexMetaData, XContentBuilder builder, ToXContent.Params params) throws IOException;
/**
* Merges from first to second, with first being more important, i.e., if something exists in first and second,
* first will prevail.
*/
T merge(T first, T second);
}
/**
* Merges from this to another, with this being more important, i.e., if something exists in this and another,
* this will prevail.
*/
Custom mergeWith(Custom another);
}
public static Map<String, Custom.Factory> customFactories = new HashMap<>();
public static Map<String, Custom> customPrototypes = new HashMap<>();
static {
// register non plugin custom metadata
registerFactory(IndexWarmersMetaData.TYPE, IndexWarmersMetaData.FACTORY);
registerPrototype(IndexWarmersMetaData.TYPE, IndexWarmersMetaData.PROTO);
}
/**
* Register a custom index meta data factory. Make sure to call it from a static block.
*/
public static void registerFactory(String type, Custom.Factory factory) {
customFactories.put(type, factory);
public static void registerPrototype(String type, Custom proto) {
customPrototypes.put(type, proto);
}
@Nullable
public static <T extends Custom> Custom.Factory<T> lookupFactory(String type) {
return customFactories.get(type);
public static <T extends Custom> T lookupPrototype(String type) {
//noinspection unchecked
return (T) customPrototypes.get(type);
}
public static <T extends Custom> Custom.Factory<T> lookupFactorySafe(String type) {
Custom.Factory<T> factory = customFactories.get(type);
if (factory == null) {
throw new IllegalArgumentException("No custom index metadata factoy registered for type [" + type + "]");
public static <T extends Custom> T lookupPrototypeSafe(String type) {
//noinspection unchecked
T proto = (T) customPrototypes.get(type);
if (proto == null) {
throw new IllegalArgumentException("No custom metadata prototype registered for type [" + type + "]");
}
return factory;
return proto;
}
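
A hedged sketch of what registration looks like for a plugin-provided custom section under the new scheme; MyCustomMetaData is hypothetical, and only registerPrototype and lookupPrototypeSafe above are real:

[source,java]
--------------------------------------------------
// Hypothetical plugin code. MyCustomMetaData.PROTO would be an empty instance
// whose readFrom/readDiffFrom methods know how to deserialize this custom type.
static {
    // per the comment above, registration belongs in a static block
    IndexMetaData.registerPrototype(MyCustomMetaData.TYPE, MyCustomMetaData.PROTO);
}

// Deserialization code later looks the prototype up by its type key:
IndexMetaData.Custom proto = IndexMetaData.lookupPrototypeSafe(MyCustomMetaData.TYPE);
--------------------------------------------------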
public static final ClusterBlock INDEX_READ_ONLY_BLOCK = new ClusterBlock(5, "index read-only (api)", false, false, RestStatus.FORBIDDEN, EnumSet.of(ClusterBlockLevel.WRITE, ClusterBlockLevel.METADATA_WRITE));
@ -451,7 +448,9 @@ public class IndexMetaData {
if (state != that.state) {
return false;
}
if (!customs.equals(that.customs)) {
return false;
}
return true;
}
@ -465,6 +464,126 @@ public class IndexMetaData {
return result;
}
@Override
public Diff<IndexMetaData> diff(IndexMetaData previousState) {
return new IndexMetaDataDiff(previousState, this);
}
@Override
public Diff<IndexMetaData> readDiffFrom(StreamInput in) throws IOException {
return new IndexMetaDataDiff(in);
}
private static class IndexMetaDataDiff implements Diff<IndexMetaData> {
private final String index;
private final long version;
private final State state;
private final Settings settings;
private final Diff<ImmutableOpenMap<String, MappingMetaData>> mappings;
private final Diff<ImmutableOpenMap<String, AliasMetaData>> aliases;
private Diff<ImmutableOpenMap<String, Custom>> customs;
public IndexMetaDataDiff(IndexMetaData before, IndexMetaData after) {
index = after.index;
version = after.version;
state = after.state;
settings = after.settings;
mappings = DiffableUtils.diff(before.mappings, after.mappings);
aliases = DiffableUtils.diff(before.aliases, after.aliases);
customs = DiffableUtils.diff(before.customs, after.customs);
}
public IndexMetaDataDiff(StreamInput in) throws IOException {
index = in.readString();
version = in.readLong();
state = State.fromId(in.readByte());
settings = ImmutableSettings.readSettingsFromStream(in);
mappings = DiffableUtils.readImmutableOpenMapDiff(in, MappingMetaData.PROTO);
aliases = DiffableUtils.readImmutableOpenMapDiff(in, AliasMetaData.PROTO);
customs = DiffableUtils.readImmutableOpenMapDiff(in, new DiffableUtils.KeyedReader<Custom>() {
@Override
public Custom readFrom(StreamInput in, String key) throws IOException {
return lookupPrototypeSafe(key).readFrom(in);
}
@Override
public Diff<Custom> readDiffFrom(StreamInput in, String key) throws IOException {
return lookupPrototypeSafe(key).readDiffFrom(in);
}
});
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeString(index);
out.writeLong(version);
out.writeByte(state.id);
ImmutableSettings.writeSettingsToStream(settings, out);
mappings.writeTo(out);
aliases.writeTo(out);
customs.writeTo(out);
}
@Override
public IndexMetaData apply(IndexMetaData part) {
Builder builder = builder(index);
builder.version(version);
builder.state(state);
builder.settings(settings);
builder.mappings.putAll(mappings.apply(part.mappings));
builder.aliases.putAll(aliases.apply(part.aliases));
builder.customs.putAll(customs.apply(part.customs));
return builder.build();
}
}
@Override
public IndexMetaData readFrom(StreamInput in) throws IOException {
Builder builder = new Builder(in.readString());
builder.version(in.readLong());
builder.state(State.fromId(in.readByte()));
builder.settings(readSettingsFromStream(in));
int mappingsSize = in.readVInt();
for (int i = 0; i < mappingsSize; i++) {
MappingMetaData mappingMd = MappingMetaData.PROTO.readFrom(in);
builder.putMapping(mappingMd);
}
int aliasesSize = in.readVInt();
for (int i = 0; i < aliasesSize; i++) {
AliasMetaData aliasMd = AliasMetaData.Builder.readFrom(in);
builder.putAlias(aliasMd);
}
int customSize = in.readVInt();
for (int i = 0; i < customSize; i++) {
String type = in.readString();
Custom customIndexMetaData = lookupPrototypeSafe(type).readFrom(in);
builder.putCustom(type, customIndexMetaData);
}
return builder.build();
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeString(index);
out.writeLong(version);
out.writeByte(state.id());
writeSettingsToStream(settings, out);
out.writeVInt(mappings.size());
for (ObjectCursor<MappingMetaData> cursor : mappings.values()) {
cursor.value.writeTo(out);
}
out.writeVInt(aliases.size());
for (ObjectCursor<AliasMetaData> cursor : aliases.values()) {
cursor.value.writeTo(out);
}
out.writeVInt(customs.size());
for (ObjectObjectCursor<String, Custom> cursor : customs) {
out.writeString(cursor.key);
cursor.value.writeTo(out);
}
}
public static Builder builder(String index) {
return new Builder(index);
}
@ -660,7 +779,7 @@ public class IndexMetaData {
for (ObjectObjectCursor<String, Custom> cursor : indexMetaData.customs()) {
builder.startObject(cursor.key, XContentBuilder.FieldCaseConversion.NONE);
lookupFactorySafe(cursor.key).toXContent(cursor.value, builder, params);
cursor.value.toXContent(builder, params);
builder.endObject();
}
@ -707,12 +826,13 @@ public class IndexMetaData {
}
} else {
// check if it's a custom index metadata
Custom.Factory<Custom> factory = lookupFactory(currentFieldName);
if (factory == null) {
Custom proto = lookupPrototype(currentFieldName);
if (proto == null) {
//TODO warn
parser.skipChildren();
} else {
builder.putCustom(factory.type(), factory.fromXContent(parser));
Custom custom = proto.fromXContent(parser);
builder.putCustom(custom.type(), custom);
}
}
} else if (token == XContentParser.Token.START_ARRAY) {
@ -741,47 +861,7 @@ public class IndexMetaData {
}
public static IndexMetaData readFrom(StreamInput in) throws IOException {
Builder builder = new Builder(in.readString());
builder.version(in.readLong());
builder.state(State.fromId(in.readByte()));
builder.settings(readSettingsFromStream(in));
int mappingsSize = in.readVInt();
for (int i = 0; i < mappingsSize; i++) {
MappingMetaData mappingMd = MappingMetaData.readFrom(in);
builder.putMapping(mappingMd);
}
int aliasesSize = in.readVInt();
for (int i = 0; i < aliasesSize; i++) {
AliasMetaData aliasMd = AliasMetaData.Builder.readFrom(in);
builder.putAlias(aliasMd);
}
int customSize = in.readVInt();
for (int i = 0; i < customSize; i++) {
String type = in.readString();
Custom customIndexMetaData = lookupFactorySafe(type).readFrom(in);
builder.putCustom(type, customIndexMetaData);
}
return builder.build();
}
public static void writeTo(IndexMetaData indexMetaData, StreamOutput out) throws IOException {
out.writeString(indexMetaData.index());
out.writeLong(indexMetaData.version());
out.writeByte(indexMetaData.state().id());
writeSettingsToStream(indexMetaData.settings(), out);
out.writeVInt(indexMetaData.mappings().size());
for (ObjectCursor<MappingMetaData> cursor : indexMetaData.mappings().values()) {
MappingMetaData.writeTo(cursor.value, out);
}
out.writeVInt(indexMetaData.aliases().size());
for (ObjectCursor<AliasMetaData> cursor : indexMetaData.aliases().values()) {
AliasMetaData.Builder.writeTo(cursor.value, out);
}
out.writeVInt(indexMetaData.customs().size());
for (ObjectObjectCursor<String, Custom> cursor : indexMetaData.customs()) {
out.writeString(cursor.key);
lookupFactorySafe(cursor.key).writeTo(cursor.value, out);
}
return PROTO.readFrom(in);
}
}

View File

@ -21,7 +21,7 @@ package org.elasticsearch.cluster.metadata;
import com.carrotsearch.hppc.cursors.ObjectCursor;
import com.carrotsearch.hppc.cursors.ObjectObjectCursor;
import com.google.common.collect.Sets;
import org.elasticsearch.Version;
import org.elasticsearch.cluster.AbstractDiffable;
import org.elasticsearch.common.collect.ImmutableOpenMap;
import org.elasticsearch.common.collect.MapBuilder;
import org.elasticsearch.common.compress.CompressedString;
@ -42,7 +42,9 @@ import java.util.Set;
/**
*
*/
public class IndexTemplateMetaData {
public class IndexTemplateMetaData extends AbstractDiffable<IndexTemplateMetaData> {
public static final IndexTemplateMetaData PROTO = IndexTemplateMetaData.builder("").build();
private final String name;
@ -161,11 +163,57 @@ public class IndexTemplateMetaData {
return result;
}
@Override
public IndexTemplateMetaData readFrom(StreamInput in) throws IOException {
Builder builder = new Builder(in.readString());
builder.order(in.readInt());
builder.template(in.readString());
builder.settings(ImmutableSettings.readSettingsFromStream(in));
int mappingsSize = in.readVInt();
for (int i = 0; i < mappingsSize; i++) {
builder.putMapping(in.readString(), CompressedString.readCompressedString(in));
}
int aliasesSize = in.readVInt();
for (int i = 0; i < aliasesSize; i++) {
AliasMetaData aliasMd = AliasMetaData.Builder.readFrom(in);
builder.putAlias(aliasMd);
}
int customSize = in.readVInt();
for (int i = 0; i < customSize; i++) {
String type = in.readString();
IndexMetaData.Custom customIndexMetaData = IndexMetaData.lookupPrototypeSafe(type).readFrom(in);
builder.putCustom(type, customIndexMetaData);
}
return builder.build();
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeString(name);
out.writeInt(order);
out.writeString(template);
ImmutableSettings.writeSettingsToStream(settings, out);
out.writeVInt(mappings.size());
for (ObjectObjectCursor<String, CompressedString> cursor : mappings) {
out.writeString(cursor.key);
cursor.value.writeTo(out);
}
out.writeVInt(aliases.size());
for (ObjectCursor<AliasMetaData> cursor : aliases.values()) {
cursor.value.writeTo(out);
}
out.writeVInt(customs.size());
for (ObjectObjectCursor<String, IndexMetaData.Custom> cursor : customs) {
out.writeString(cursor.key);
cursor.value.writeTo(out);
}
}
public static class Builder {
private static final Set<String> VALID_FIELDS = Sets.newHashSet("template", "order", "mappings", "settings");
static {
VALID_FIELDS.addAll(IndexMetaData.customFactories.keySet());
VALID_FIELDS.addAll(IndexMetaData.customPrototypes.keySet());
}
private String name;
@ -305,7 +353,7 @@ public class IndexTemplateMetaData {
for (ObjectObjectCursor<String, IndexMetaData.Custom> cursor : indexTemplateMetaData.customs()) {
builder.startObject(cursor.key, XContentBuilder.FieldCaseConversion.NONE);
IndexMetaData.lookupFactorySafe(cursor.key).toXContent(cursor.value, builder, params);
cursor.value.toXContent(builder, params);
builder.endObject();
}
@ -347,12 +395,13 @@ public class IndexTemplateMetaData {
}
} else {
// check if it's a custom index metadata
IndexMetaData.Custom.Factory<IndexMetaData.Custom> factory = IndexMetaData.lookupFactory(currentFieldName);
if (factory == null) {
IndexMetaData.Custom proto = IndexMetaData.lookupPrototype(currentFieldName);
if (proto == null) {
//TODO warn
parser.skipChildren();
} else {
builder.putCustom(factory.type(), factory.fromXContent(parser));
IndexMetaData.Custom custom = proto.fromXContent(parser);
builder.putCustom(custom.type(), custom);
}
}
} else if (token == XContentParser.Token.START_ARRAY) {
@ -401,47 +450,7 @@ public class IndexTemplateMetaData {
}
public static IndexTemplateMetaData readFrom(StreamInput in) throws IOException {
Builder builder = new Builder(in.readString());
builder.order(in.readInt());
builder.template(in.readString());
builder.settings(ImmutableSettings.readSettingsFromStream(in));
int mappingsSize = in.readVInt();
for (int i = 0; i < mappingsSize; i++) {
builder.putMapping(in.readString(), CompressedString.readCompressedString(in));
}
int aliasesSize = in.readVInt();
for (int i = 0; i < aliasesSize; i++) {
AliasMetaData aliasMd = AliasMetaData.Builder.readFrom(in);
builder.putAlias(aliasMd);
}
int customSize = in.readVInt();
for (int i = 0; i < customSize; i++) {
String type = in.readString();
IndexMetaData.Custom customIndexMetaData = IndexMetaData.lookupFactorySafe(type).readFrom(in);
builder.putCustom(type, customIndexMetaData);
}
return builder.build();
}
public static void writeTo(IndexTemplateMetaData indexTemplateMetaData, StreamOutput out) throws IOException {
out.writeString(indexTemplateMetaData.name());
out.writeInt(indexTemplateMetaData.order());
out.writeString(indexTemplateMetaData.template());
ImmutableSettings.writeSettingsToStream(indexTemplateMetaData.settings(), out);
out.writeVInt(indexTemplateMetaData.mappings().size());
for (ObjectObjectCursor<String, CompressedString> cursor : indexTemplateMetaData.mappings()) {
out.writeString(cursor.key);
cursor.value.writeTo(out);
}
out.writeVInt(indexTemplateMetaData.aliases().size());
for (ObjectCursor<AliasMetaData> cursor : indexTemplateMetaData.aliases().values()) {
AliasMetaData.Builder.writeTo(cursor.value, out);
}
out.writeVInt(indexTemplateMetaData.customs().size());
for (ObjectObjectCursor<String, IndexMetaData.Custom> cursor : indexTemplateMetaData.customs()) {
out.writeString(cursor.key);
IndexMetaData.lookupFactorySafe(cursor.key).writeTo(cursor.value, out);
}
return PROTO.readFrom(in);
}
}

View File

@ -19,8 +19,10 @@
package org.elasticsearch.cluster.metadata;
import com.google.common.collect.Maps;
import org.elasticsearch.Version;
import org.elasticsearch.action.TimestampParsingException;
import org.elasticsearch.cluster.AbstractDiffable;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.compress.CompressedString;
@ -38,14 +40,18 @@ import org.elasticsearch.index.mapper.internal.TimestampFieldMapper;
import java.io.IOException;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import static com.google.common.collect.Maps.newHashMap;
import static org.elasticsearch.common.xcontent.support.XContentMapValues.nodeBooleanValue;
/**
* Mapping configuration for a type.
*/
public class MappingMetaData {
public class MappingMetaData extends AbstractDiffable<MappingMetaData> {
public static final MappingMetaData PROTO = new MappingMetaData();
public static class Id {
@ -317,6 +323,15 @@ public class MappingMetaData {
initMappers(withoutType);
}
private MappingMetaData() {
this.type = "";
try {
this.source = new CompressedString("");
} catch (IOException ex) {
throw new IllegalStateException("Cannot create MappingMetaData prototype", ex);
}
}
private void initMappers(Map<String, Object> withoutType) {
if (withoutType.containsKey("_id")) {
String path = null;
@ -532,34 +547,35 @@ public class MappingMetaData {
}
}
public static void writeTo(MappingMetaData mappingMd, StreamOutput out) throws IOException {
out.writeString(mappingMd.type());
mappingMd.source().writeTo(out);
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeString(type());
source().writeTo(out);
// id
if (mappingMd.id().hasPath()) {
if (id().hasPath()) {
out.writeBoolean(true);
out.writeString(mappingMd.id().path());
out.writeString(id().path());
} else {
out.writeBoolean(false);
}
// routing
out.writeBoolean(mappingMd.routing().required());
if (mappingMd.routing().hasPath()) {
out.writeBoolean(routing().required());
if (routing().hasPath()) {
out.writeBoolean(true);
out.writeString(mappingMd.routing().path());
out.writeString(routing().path());
} else {
out.writeBoolean(false);
}
// timestamp
out.writeBoolean(mappingMd.timestamp().enabled());
out.writeOptionalString(mappingMd.timestamp().path());
out.writeString(mappingMd.timestamp().format());
out.writeOptionalString(mappingMd.timestamp().defaultTimestamp());
out.writeBoolean(timestamp().enabled());
out.writeOptionalString(timestamp().path());
out.writeString(timestamp().format());
out.writeOptionalString(timestamp().defaultTimestamp());
// TODO Remove the test in elasticsearch 2.0.0
if (out.getVersion().onOrAfter(Version.V_1_5_0)) {
out.writeOptionalBoolean(mappingMd.timestamp().ignoreMissing());
out.writeOptionalBoolean(timestamp().ignoreMissing());
}
out.writeBoolean(mappingMd.hasParentField());
out.writeBoolean(hasParentField());
}
@Override
@ -588,7 +604,7 @@ public class MappingMetaData {
return result;
}
public static MappingMetaData readFrom(StreamInput in) throws IOException {
public MappingMetaData readFrom(StreamInput in) throws IOException {
String type = in.readString();
CompressedString source = CompressedString.readCompressedString(in);
// id

View File

@ -25,7 +25,9 @@ import com.carrotsearch.hppc.cursors.ObjectCursor;
import com.carrotsearch.hppc.cursors.ObjectObjectCursor;
import com.google.common.base.Predicate;
import com.google.common.collect.*;
import org.elasticsearch.cluster.*;
import org.elasticsearch.action.support.IndicesOptions;
import org.elasticsearch.cluster.DiffableUtils.KeyedReader;
import org.elasticsearch.cluster.block.ClusterBlock;
import org.elasticsearch.cluster.block.ClusterBlockLevel;
import org.elasticsearch.common.Nullable;
@ -55,7 +57,9 @@ import static org.elasticsearch.common.settings.ImmutableSettings.*;
/**
*
*/
public class MetaData implements Iterable<IndexMetaData> {
public class MetaData implements Iterable<IndexMetaData>, Diffable<MetaData> {
public static final MetaData PROTO = builder().build();
public static final String ALL = "_all";
@ -67,60 +71,51 @@ public class MetaData implements Iterable<IndexMetaData> {
GATEWAY,
/* Custom metadata should be stored as part of a snapshot */
SNAPSHOT;
SNAPSHOT
}
public static EnumSet<XContentContext> API_ONLY = EnumSet.of(XContentContext.API);
public static EnumSet<XContentContext> API_AND_GATEWAY = EnumSet.of(XContentContext.API, XContentContext.GATEWAY);
public static EnumSet<XContentContext> API_AND_SNAPSHOT = EnumSet.of(XContentContext.API, XContentContext.SNAPSHOT);
public interface Custom {
public interface Custom extends Diffable<Custom>, ToXContent {
abstract class Factory<T extends Custom> {
String type();
public abstract String type();
Custom fromXContent(XContentParser parser) throws IOException;
public abstract T readFrom(StreamInput in) throws IOException;
public abstract void writeTo(T customIndexMetaData, StreamOutput out) throws IOException;
public abstract T fromXContent(XContentParser parser) throws IOException;
public abstract void toXContent(T customIndexMetaData, XContentBuilder builder, ToXContent.Params params) throws IOException;
public EnumSet<XContentContext> context() {
return API_ONLY;
}
}
EnumSet<XContentContext> context();
}
public static Map<String, Custom.Factory> customFactories = new HashMap<>();
public static Map<String, Custom> customPrototypes = new HashMap<>();
static {
// register non plugin custom metadata
registerFactory(RepositoriesMetaData.TYPE, RepositoriesMetaData.FACTORY);
registerFactory(SnapshotMetaData.TYPE, SnapshotMetaData.FACTORY);
registerFactory(RestoreMetaData.TYPE, RestoreMetaData.FACTORY);
registerPrototype(RepositoriesMetaData.TYPE, RepositoriesMetaData.PROTO);
registerPrototype(SnapshotMetaData.TYPE, SnapshotMetaData.PROTO);
registerPrototype(RestoreMetaData.TYPE, RestoreMetaData.PROTO);
}
/**
* Register a custom index meta data factory. Make sure to call it from a static block.
*/
public static void registerFactory(String type, Custom.Factory factory) {
customFactories.put(type, factory);
public static void registerPrototype(String type, Custom proto) {
customPrototypes.put(type, proto);
}
@Nullable
public static <T extends Custom> Custom.Factory<T> lookupFactory(String type) {
return customFactories.get(type);
public static <T extends Custom> T lookupPrototype(String type) {
//noinspection unchecked
return (T) customPrototypes.get(type);
}
public static <T extends Custom> Custom.Factory<T> lookupFactorySafe(String type) {
Custom.Factory<T> factory = customFactories.get(type);
if (factory == null) {
throw new IllegalArgumentException("No custom index metadata factory registered for type [" + type + "]");
public static <T extends Custom> T lookupPrototypeSafe(String type) {
//noinspection unchecked
T proto = (T) customPrototypes.get(type);
if (proto == null) {
throw new IllegalArgumentException("No custom metadata prototype registered for type [" + type + "]");
}
return factory;
return proto;
}
@ -644,14 +639,22 @@ public class MetaData implements Iterable<IndexMetaData> {
/**
* Translates the provided indices or aliases, eventually containing wildcard expressions, into actual indices.
*
* @param indicesOptions how the aliases or indices need to be resolved to concrete indices
* @param indicesOptions how the aliases or indices need to be resolved to concrete indices
* @param aliasesOrIndices the aliases or indices to be resolved to concrete indices
* @return the obtained concrete indices
<<<<<<< HEAD
* @throws IndexMissingException if one of the aliases or indices is missing and the provided indices options
* don't allow such a case, or if the final result of the indices resolution is no indices and the indices options
* don't allow such a case.
* @throws IllegalArgumentException if one of the aliases resolve to multiple indices and the provided
* indices options don't allow such a case.
=======
* @throws IndexMissingException if one of the aliases or indices is missing and the provided indices options
* don't allow such a case, or if the final result of the indices resolution is no indices and the indices options
* don't allow such a case.
* @throws ElasticsearchIllegalArgumentException if one of the aliases resolve to multiple indices and the provided
* indices options don't allow such a case.
>>>>>>> Add support for cluster state diffs
*/
public String[] concreteIndices(IndicesOptions indicesOptions, String... aliasesOrIndices) throws IndexMissingException, IllegalArgumentException {
if (indicesOptions.expandWildcardsOpen() || indicesOptions.expandWildcardsClosed()) {
@ -1139,14 +1142,14 @@ public class MetaData implements Iterable<IndexMetaData> {
// Check if any persistent metadata needs to be saved
int customCount1 = 0;
for (ObjectObjectCursor<String, Custom> cursor : metaData1.customs) {
if (customFactories.get(cursor.key).context().contains(XContentContext.GATEWAY)) {
if (customPrototypes.get(cursor.key).context().contains(XContentContext.GATEWAY)) {
if (!cursor.value.equals(metaData2.custom(cursor.key))) return false;
customCount1++;
}
}
int customCount2 = 0;
for (ObjectObjectCursor<String, Custom> cursor : metaData2.customs) {
if (customFactories.get(cursor.key).context().contains(XContentContext.GATEWAY)) {
if (customPrototypes.get(cursor.key).context().contains(XContentContext.GATEWAY)) {
customCount2++;
}
}
@ -1154,6 +1157,129 @@ public class MetaData implements Iterable<IndexMetaData> {
return true;
}
@Override
public Diff<MetaData> diff(MetaData previousState) {
return new MetaDataDiff(previousState, this);
}
@Override
public Diff<MetaData> readDiffFrom(StreamInput in) throws IOException {
return new MetaDataDiff(in);
}
private static class MetaDataDiff implements Diff<MetaData> {
private long version;
private String uuid;
private Settings transientSettings;
private Settings persistentSettings;
private Diff<ImmutableOpenMap<String, IndexMetaData>> indices;
private Diff<ImmutableOpenMap<String, IndexTemplateMetaData>> templates;
private Diff<ImmutableOpenMap<String, Custom>> customs;
public MetaDataDiff(MetaData before, MetaData after) {
uuid = after.uuid;
version = after.version;
transientSettings = after.transientSettings;
persistentSettings = after.persistentSettings;
indices = DiffableUtils.diff(before.indices, after.indices);
templates = DiffableUtils.diff(before.templates, after.templates);
customs = DiffableUtils.diff(before.customs, after.customs);
}
public MetaDataDiff(StreamInput in) throws IOException {
uuid = in.readString();
version = in.readLong();
transientSettings = ImmutableSettings.readSettingsFromStream(in);
persistentSettings = ImmutableSettings.readSettingsFromStream(in);
indices = DiffableUtils.readImmutableOpenMapDiff(in, IndexMetaData.PROTO);
templates = DiffableUtils.readImmutableOpenMapDiff(in, IndexTemplateMetaData.PROTO);
customs = DiffableUtils.readImmutableOpenMapDiff(in, new KeyedReader<Custom>() {
@Override
public Custom readFrom(StreamInput in, String key) throws IOException {
return lookupPrototypeSafe(key).readFrom(in);
}
@Override
public Diff<Custom> readDiffFrom(StreamInput in, String key) throws IOException {
return lookupPrototypeSafe(key).readDiffFrom(in);
}
});
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeString(uuid);
out.writeLong(version);
ImmutableSettings.writeSettingsToStream(transientSettings, out);
ImmutableSettings.writeSettingsToStream(persistentSettings, out);
indices.writeTo(out);
templates.writeTo(out);
customs.writeTo(out);
}
@Override
public MetaData apply(MetaData part) {
Builder builder = builder();
builder.uuid(uuid);
builder.version(version);
builder.transientSettings(transientSettings);
builder.persistentSettings(persistentSettings);
builder.indices(indices.apply(part.indices));
builder.templates(templates.apply(part.templates));
builder.customs(customs.apply(part.customs));
return builder.build();
}
}
@Override
public MetaData readFrom(StreamInput in) throws IOException {
Builder builder = new Builder();
builder.version = in.readLong();
builder.uuid = in.readString();
builder.transientSettings(readSettingsFromStream(in));
builder.persistentSettings(readSettingsFromStream(in));
int size = in.readVInt();
for (int i = 0; i < size; i++) {
builder.put(IndexMetaData.Builder.readFrom(in), false);
}
size = in.readVInt();
for (int i = 0; i < size; i++) {
builder.put(IndexTemplateMetaData.Builder.readFrom(in));
}
int customSize = in.readVInt();
for (int i = 0; i < customSize; i++) {
String type = in.readString();
Custom customIndexMetaData = lookupPrototypeSafe(type).readFrom(in);
builder.putCustom(type, customIndexMetaData);
}
return builder.build();
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeLong(version);
out.writeString(uuid);
writeSettingsToStream(transientSettings, out);
writeSettingsToStream(persistentSettings, out);
out.writeVInt(indices.size());
for (IndexMetaData indexMetaData : this) {
indexMetaData.writeTo(out);
}
out.writeVInt(templates.size());
for (ObjectCursor<IndexTemplateMetaData> cursor : templates.values()) {
cursor.value.writeTo(out);
}
out.writeVInt(customs.size());
for (ObjectObjectCursor<String, Custom> cursor : customs) {
out.writeString(cursor.key);
cursor.value.writeTo(out);
}
}
public static Builder builder() {
return new Builder();
}
@ -1225,6 +1351,11 @@ public class MetaData implements Iterable<IndexMetaData> {
return this;
}
public Builder indices(ImmutableOpenMap<String, IndexMetaData> indices) {
this.indices.putAll(indices);
return this;
}
public Builder put(IndexTemplateMetaData.Builder template) {
return put(template.build());
}
@ -1239,6 +1370,11 @@ public class MetaData implements Iterable<IndexMetaData> {
return this;
}
public Builder templates(ImmutableOpenMap<String, IndexTemplateMetaData> templates) {
this.templates.putAll(templates);
return this;
}
public Custom getCustom(String type) {
return customs.get(type);
}
@ -1253,6 +1389,11 @@ public class MetaData implements Iterable<IndexMetaData> {
return this;
}
public Builder customs(ImmutableOpenMap<String, Custom> customs) {
this.customs.putAll(customs);
return this;
}
public Builder updateSettings(Settings settings, String... indices) {
if (indices == null || indices.length == 0) {
indices = this.indices.keys().toArray(String.class);
@ -1305,6 +1446,11 @@ public class MetaData implements Iterable<IndexMetaData> {
return this;
}
public Builder uuid(String uuid) {
this.uuid = uuid;
return this;
}
public Builder generateUuidIfNeeded() {
if (uuid.equals("_na_")) {
uuid = Strings.randomBase64UUID();
@ -1363,10 +1509,10 @@ public class MetaData implements Iterable<IndexMetaData> {
}
for (ObjectObjectCursor<String, Custom> cursor : metaData.customs()) {
Custom.Factory factory = lookupFactorySafe(cursor.key);
if (factory.context().contains(context)) {
Custom proto = lookupPrototypeSafe(cursor.key);
if (proto.context().contains(context)) {
builder.startObject(cursor.key);
factory.toXContent(cursor.value, builder, params);
cursor.value.toXContent(builder, params);
builder.endObject();
}
}
@ -1410,12 +1556,13 @@ public class MetaData implements Iterable<IndexMetaData> {
}
} else {
// check if it's a custom index metadata
Custom.Factory<Custom> factory = lookupFactory(currentFieldName);
if (factory == null) {
Custom proto = lookupPrototype(currentFieldName);
if (proto == null) {
//TODO warn
parser.skipChildren();
} else {
builder.putCustom(factory.type(), factory.fromXContent(parser));
Custom custom = proto.fromXContent(parser);
builder.putCustom(custom.type(), custom);
}
}
} else if (token.isValue()) {
@ -1430,46 +1577,7 @@ public class MetaData implements Iterable<IndexMetaData> {
}
public static MetaData readFrom(StreamInput in) throws IOException {
Builder builder = new Builder();
builder.version = in.readLong();
builder.uuid = in.readString();
builder.transientSettings(readSettingsFromStream(in));
builder.persistentSettings(readSettingsFromStream(in));
int size = in.readVInt();
for (int i = 0; i < size; i++) {
builder.put(IndexMetaData.Builder.readFrom(in), false);
}
size = in.readVInt();
for (int i = 0; i < size; i++) {
builder.put(IndexTemplateMetaData.Builder.readFrom(in));
}
int customSize = in.readVInt();
for (int i = 0; i < customSize; i++) {
String type = in.readString();
Custom customIndexMetaData = lookupFactorySafe(type).readFrom(in);
builder.putCustom(type, customIndexMetaData);
}
return builder.build();
}
public static void writeTo(MetaData metaData, StreamOutput out) throws IOException {
out.writeLong(metaData.version);
out.writeString(metaData.uuid);
writeSettingsToStream(metaData.transientSettings(), out);
writeSettingsToStream(metaData.persistentSettings(), out);
out.writeVInt(metaData.indices.size());
for (IndexMetaData indexMetaData : metaData) {
IndexMetaData.Builder.writeTo(indexMetaData, out);
}
out.writeVInt(metaData.templates.size());
for (ObjectCursor<IndexTemplateMetaData> cursor : metaData.templates.values()) {
IndexTemplateMetaData.Builder.writeTo(cursor.value, out);
}
out.writeVInt(metaData.customs().size());
for (ObjectObjectCursor<String, Custom> cursor : metaData.customs()) {
out.writeString(cursor.key);
lookupFactorySafe(cursor.key).writeTo(cursor.value, out);
}
return PROTO.readFrom(in);
}
}
}

View File

@ -272,7 +272,7 @@ public class MetaDataCreateIndexService extends AbstractComponent {
if (existing == null) {
customs.put(type, custom);
} else {
IndexMetaData.Custom merged = IndexMetaData.lookupFactorySafe(type).merge(existing, custom);
IndexMetaData.Custom merged = existing.mergeWith(custom);
customs.put(type, merged);
}
}

View File

@ -21,6 +21,8 @@ package org.elasticsearch.cluster.metadata;
import com.google.common.collect.ImmutableList;
import org.elasticsearch.ElasticsearchParseException;
import org.elasticsearch.cluster.AbstractDiffable;
import org.elasticsearch.cluster.metadata.MetaData.Custom;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.settings.ImmutableSettings;
@ -39,11 +41,11 @@ import java.util.Map;
/**
* Contains metadata about registered snapshot repositories
*/
public class RepositoriesMetaData implements MetaData.Custom {
public class RepositoriesMetaData extends AbstractDiffable<Custom> implements MetaData.Custom {
public static final String TYPE = "repositories";
public static final Factory FACTORY = new Factory();
public static final RepositoriesMetaData PROTO = new RepositoriesMetaData();
private final ImmutableList<RepositoryMetaData> repositories;
@ -80,122 +82,132 @@ public class RepositoriesMetaData implements MetaData.Custom {
return null;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
RepositoriesMetaData that = (RepositoriesMetaData) o;
return repositories.equals(that.repositories);
}
@Override
public int hashCode() {
return repositories.hashCode();
}
/**
* Repository metadata factory
* {@inheritDoc}
*/
public static class Factory extends MetaData.Custom.Factory<RepositoriesMetaData> {
@Override
public String type() {
return TYPE;
}
/**
* {@inheritDoc}
*/
@Override
public String type() {
return TYPE;
/**
* {@inheritDoc}
*/
@Override
public Custom readFrom(StreamInput in) throws IOException {
RepositoryMetaData[] repository = new RepositoryMetaData[in.readVInt()];
for (int i = 0; i < repository.length; i++) {
repository[i] = RepositoryMetaData.readFrom(in);
}
return new RepositoriesMetaData(repository);
}
/**
* {@inheritDoc}
*/
@Override
public RepositoriesMetaData readFrom(StreamInput in) throws IOException {
RepositoryMetaData[] repository = new RepositoryMetaData[in.readVInt()];
for (int i = 0; i < repository.length; i++) {
repository[i] = RepositoryMetaData.readFrom(in);
}
return new RepositoriesMetaData(repository);
}
/**
* {@inheritDoc}
*/
@Override
public void writeTo(RepositoriesMetaData repositories, StreamOutput out) throws IOException {
out.writeVInt(repositories.repositories().size());
for (RepositoryMetaData repository : repositories.repositories()) {
repository.writeTo(out);
}
}
/**
* {@inheritDoc}
*/
@Override
public RepositoriesMetaData fromXContent(XContentParser parser) throws IOException {
XContentParser.Token token;
List<RepositoryMetaData> repository = new ArrayList<>();
while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {
if (token == XContentParser.Token.FIELD_NAME) {
String name = parser.currentName();
if (parser.nextToken() != XContentParser.Token.START_OBJECT) {
throw new ElasticsearchParseException("failed to parse repository [" + name + "], expected object");
}
String type = null;
Settings settings = ImmutableSettings.EMPTY;
while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {
if (token == XContentParser.Token.FIELD_NAME) {
String currentFieldName = parser.currentName();
if ("type".equals(currentFieldName)) {
if (parser.nextToken() != XContentParser.Token.VALUE_STRING) {
throw new ElasticsearchParseException("failed to parse repository [" + name + "], unknown type");
}
type = parser.text();
} else if ("settings".equals(currentFieldName)) {
if (parser.nextToken() != XContentParser.Token.START_OBJECT) {
throw new ElasticsearchParseException("failed to parse repository [" + name + "], incompatible params");
}
settings = ImmutableSettings.settingsBuilder().put(SettingsLoader.Helper.loadNestedFromMap(parser.mapOrdered())).build();
} else {
throw new ElasticsearchParseException("failed to parse repository [" + name + "], unknown field [" + currentFieldName + "]");
}
} else {
throw new ElasticsearchParseException("failed to parse repository [" + name + "]");
}
}
if (type == null) {
throw new ElasticsearchParseException("failed to parse repository [" + name + "], missing repository type");
}
repository.add(new RepositoryMetaData(name, type, settings));
} else {
throw new ElasticsearchParseException("failed to parse repositories");
}
}
return new RepositoriesMetaData(repository.toArray(new RepositoryMetaData[repository.size()]));
}
/**
* {@inheritDoc}
*/
@Override
public void toXContent(RepositoriesMetaData customIndexMetaData, XContentBuilder builder, ToXContent.Params params) throws IOException {
for (RepositoryMetaData repository : customIndexMetaData.repositories()) {
toXContent(repository, builder, params);
}
}
@Override
public EnumSet<MetaData.XContentContext> context() {
return MetaData.API_AND_GATEWAY;
}
/**
* Serializes information about a single repository
*
* @param repository repository metadata
* @param builder XContent builder
* @param params serialization parameters
* @throws IOException
*/
public void toXContent(RepositoryMetaData repository, XContentBuilder builder, ToXContent.Params params) throws IOException {
builder.startObject(repository.name(), XContentBuilder.FieldCaseConversion.NONE);
builder.field("type", repository.type());
builder.startObject("settings");
for (Map.Entry<String, String> settingEntry : repository.settings().getAsMap().entrySet()) {
builder.field(settingEntry.getKey(), settingEntry.getValue());
}
builder.endObject();
builder.endObject();
/**
* {@inheritDoc}
*/
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeVInt(repositories.size());
for (RepositoryMetaData repository : repositories) {
repository.writeTo(out);
}
}
/**
* {@inheritDoc}
*/
@Override
public RepositoriesMetaData fromXContent(XContentParser parser) throws IOException {
XContentParser.Token token;
List<RepositoryMetaData> repository = new ArrayList<>();
while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {
if (token == XContentParser.Token.FIELD_NAME) {
String name = parser.currentName();
if (parser.nextToken() != XContentParser.Token.START_OBJECT) {
throw new ElasticsearchParseException("failed to parse repository [" + name + "], expected object");
}
String type = null;
Settings settings = ImmutableSettings.EMPTY;
while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {
if (token == XContentParser.Token.FIELD_NAME) {
String currentFieldName = parser.currentName();
if ("type".equals(currentFieldName)) {
if (parser.nextToken() != XContentParser.Token.VALUE_STRING) {
throw new ElasticsearchParseException("failed to parse repository [" + name + "], unknown type");
}
type = parser.text();
} else if ("settings".equals(currentFieldName)) {
if (parser.nextToken() != XContentParser.Token.START_OBJECT) {
throw new ElasticsearchParseException("failed to parse repository [" + name + "], incompatible params");
}
settings = ImmutableSettings.settingsBuilder().put(SettingsLoader.Helper.loadNestedFromMap(parser.mapOrdered())).build();
} else {
throw new ElasticsearchParseException("failed to parse repository [" + name + "], unknown field [" + currentFieldName + "]");
}
} else {
throw new ElasticsearchParseException("failed to parse repository [" + name + "]");
}
}
if (type == null) {
throw new ElasticsearchParseException("failed to parse repository [" + name + "], missing repository type");
}
repository.add(new RepositoryMetaData(name, type, settings));
} else {
throw new ElasticsearchParseException("failed to parse repositories");
}
}
return new RepositoriesMetaData(repository.toArray(new RepositoryMetaData[repository.size()]));
}
/**
* {@inheritDoc}
*/
@Override
public XContentBuilder toXContent(XContentBuilder builder, ToXContent.Params params) throws IOException {
for (RepositoryMetaData repository : repositories) {
toXContent(repository, builder, params);
}
return builder;
}
@Override
public EnumSet<MetaData.XContentContext> context() {
return MetaData.API_AND_GATEWAY;
}
/**
* Serializes information about a single repository
*
* @param repository repository metadata
* @param builder XContent builder
* @param params serialization parameters
* @throws IOException
*/
public static void toXContent(RepositoryMetaData repository, XContentBuilder builder, ToXContent.Params params) throws IOException {
builder.startObject(repository.name(), XContentBuilder.FieldCaseConversion.NONE);
builder.field("type", repository.type());
builder.startObject("settings");
for (Map.Entry<String, String> settingEntry : repository.settings().getAsMap().entrySet()) {
builder.field(settingEntry.getKey(), settingEntry.getValue());
}
builder.endObject();
builder.endObject();
}
}
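The fromXContent loop above is a hand-rolled pull parser: advance token by token, branch on field names, and fail fast on anything unexpected. Below is a minimal, self-contained sketch of the same state machine over Jackson's streaming parser (the library backing the JSON XContent implementation); the repository JSON shape and the `parseRepositories` helper are illustrative, not part of this change.

[source,java]
--------------------------------------------------
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;

import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;

public class RepositoriesParseSketch {

    // Parses {"repo-name": {"type": "fs", "settings": {...}}, ...} and
    // returns a name -> type map, failing fast like the loop above.
    static Map<String, String> parseRepositories(String json) throws IOException {
        Map<String, String> typesByName = new LinkedHashMap<>();
        try (JsonParser parser = new JsonFactory().createParser(json)) {
            if (parser.nextToken() != JsonToken.START_OBJECT) {
                throw new IOException("expected object");
            }
            while (parser.nextToken() != JsonToken.END_OBJECT) {
                String name = parser.getCurrentName();         // repository name
                if (parser.nextToken() != JsonToken.START_OBJECT) {
                    throw new IOException("failed to parse repository [" + name + "], expected object");
                }
                String type = null;
                while (parser.nextToken() != JsonToken.END_OBJECT) {
                    String field = parser.getCurrentName();
                    parser.nextToken();                        // move to the value
                    if ("type".equals(field)) {
                        type = parser.getText();
                    } else if ("settings".equals(field)) {
                        parser.skipChildren();                 // settings handled elsewhere
                    } else {
                        throw new IOException("failed to parse repository [" + name + "], unknown field [" + field + "]");
                    }
                }
                if (type == null) {
                    throw new IOException("failed to parse repository [" + name + "], missing repository type");
                }
                typesByName.put(name, type);
            }
        }
        return typesByName;
    }

    public static void main(String[] args) throws IOException {
        String json = "{\"backups\":{\"type\":\"fs\",\"settings\":{\"location\":\"/mount/backups\"}}}";
        System.out.println(parseRepositories(json)); // {backups=fs}
    }
}
--------------------------------------------------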

View File

@ -99,4 +99,25 @@ public class RepositoryMetaData {
out.writeString(type);
ImmutableSettings.writeSettingsToStream(settings, out);
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
RepositoryMetaData that = (RepositoryMetaData) o;
if (!name.equals(that.name)) return false;
if (!type.equals(that.type)) return false;
return settings.equals(that.settings);
}
@Override
public int hashCode() {
int result = name.hashCode();
result = 31 * result + type.hashCode();
result = 31 * result + settings.hashCode();
return result;
}
}
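The new equals/hashCode pair gives RepositoryMetaData value semantics, which is what lets diff computation decide that an entry is unchanged and skip resending it. A tiny sketch of the contract, with a stand-in RepoMeta type:

[source,java]
--------------------------------------------------
import java.util.Objects;

// Minimal value type mirroring the equals/hashCode contract above (illustrative only).
final class RepoMeta {
    final String name;
    final String type;

    RepoMeta(String name, String type) {
        this.name = name;
        this.type = type;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof RepoMeta)) return false;
        RepoMeta that = (RepoMeta) o;
        return name.equals(that.name) && type.equals(that.type);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name, type);
    }

    public static void main(String[] args) {
        // Two separately built instances compare equal, so a diff computation
        // that checks equality can treat the entry as unchanged.
        System.out.println(new RepoMeta("backups", "fs").equals(new RepoMeta("backups", "fs"))); // true
    }
}
--------------------------------------------------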

View File

@ -21,6 +21,7 @@ package org.elasticsearch.cluster.metadata;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMap;
import org.elasticsearch.cluster.AbstractDiffable;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.ToXContent;
@ -29,16 +30,17 @@ import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.index.shard.ShardId;
import java.io.IOException;
import java.util.EnumSet;
import java.util.Map;
/**
* Meta data about restore processes that are currently executing
*/
public class RestoreMetaData implements MetaData.Custom {
public class RestoreMetaData extends AbstractDiffable<MetaData.Custom> implements MetaData.Custom {
public static final String TYPE = "restore";
public static final Factory FACTORY = new Factory();
public static final RestoreMetaData PROTO = new RestoreMetaData();
private final ImmutableList<Entry> entries;
@ -394,124 +396,122 @@ public class RestoreMetaData implements MetaData.Custom {
}
/**
* Restore metadata factory
* {@inheritDoc}
*/
public static class Factory extends MetaData.Custom.Factory<RestoreMetaData> {
@Override
public String type() {
return TYPE;
}
/**
* {@inheritDoc}
*/
@Override
public String type() {
return TYPE;
}
/**
* {@inheritDoc}
*/
@Override
public RestoreMetaData readFrom(StreamInput in) throws IOException {
Entry[] entries = new Entry[in.readVInt()];
for (int i = 0; i < entries.length; i++) {
SnapshotId snapshotId = SnapshotId.readSnapshotId(in);
State state = State.fromValue(in.readByte());
int indices = in.readVInt();
ImmutableList.Builder<String> indexBuilder = ImmutableList.builder();
for (int j = 0; j < indices; j++) {
indexBuilder.add(in.readString());
}
ImmutableMap.Builder<ShardId, ShardRestoreStatus> builder = ImmutableMap.<ShardId, ShardRestoreStatus>builder();
int shards = in.readVInt();
for (int j = 0; j < shards; j++) {
ShardId shardId = ShardId.readShardId(in);
ShardRestoreStatus shardState = ShardRestoreStatus.readShardRestoreStatus(in);
builder.put(shardId, shardState);
}
entries[i] = new Entry(snapshotId, state, indexBuilder.build(), builder.build());
/**
* {@inheritDoc}
*/
@Override
public RestoreMetaData readFrom(StreamInput in) throws IOException {
Entry[] entries = new Entry[in.readVInt()];
for (int i = 0; i < entries.length; i++) {
SnapshotId snapshotId = SnapshotId.readSnapshotId(in);
State state = State.fromValue(in.readByte());
int indices = in.readVInt();
ImmutableList.Builder<String> indexBuilder = ImmutableList.builder();
for (int j = 0; j < indices; j++) {
indexBuilder.add(in.readString());
}
return new RestoreMetaData(entries);
}
/**
* {@inheritDoc}
*/
@Override
public void writeTo(RestoreMetaData repositories, StreamOutput out) throws IOException {
out.writeVInt(repositories.entries().size());
for (Entry entry : repositories.entries()) {
entry.snapshotId().writeTo(out);
out.writeByte(entry.state().value());
out.writeVInt(entry.indices().size());
for (String index : entry.indices()) {
out.writeString(index);
}
out.writeVInt(entry.shards().size());
for (Map.Entry<ShardId, ShardRestoreStatus> shardEntry : entry.shards().entrySet()) {
shardEntry.getKey().writeTo(out);
shardEntry.getValue().writeTo(out);
}
ImmutableMap.Builder<ShardId, ShardRestoreStatus> builder = ImmutableMap.<ShardId, ShardRestoreStatus>builder();
int shards = in.readVInt();
for (int j = 0; j < shards; j++) {
ShardId shardId = ShardId.readShardId(in);
ShardRestoreStatus shardState = ShardRestoreStatus.readShardRestoreStatus(in);
builder.put(shardId, shardState);
}
entries[i] = new Entry(snapshotId, state, indexBuilder.build(), builder.build());
}
return new RestoreMetaData(entries);
}
/**
* {@inheritDoc}
*/
@Override
public RestoreMetaData fromXContent(XContentParser parser) throws IOException {
throw new UnsupportedOperationException();
}
/**
* {@inheritDoc}
*/
@Override
public void toXContent(RestoreMetaData customIndexMetaData, XContentBuilder builder, ToXContent.Params params) throws IOException {
builder.startArray("snapshots");
for (Entry entry : customIndexMetaData.entries()) {
toXContent(entry, builder, params);
/**
* {@inheritDoc}
*/
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeVInt(entries.size());
for (Entry entry : entries) {
entry.snapshotId().writeTo(out);
out.writeByte(entry.state().value());
out.writeVInt(entry.indices().size());
for (String index : entry.indices()) {
out.writeString(index);
}
builder.endArray();
}
/**
* Serializes single restore operation
*
* @param entry restore operation metadata
* @param builder XContent builder
* @param params serialization parameters
* @throws IOException
*/
public void toXContent(Entry entry, XContentBuilder builder, ToXContent.Params params) throws IOException {
builder.startObject();
builder.field("snapshot", entry.snapshotId().getSnapshot());
builder.field("repository", entry.snapshotId().getRepository());
builder.field("state", entry.state());
builder.startArray("indices");
{
for (String index : entry.indices()) {
builder.value(index);
}
out.writeVInt(entry.shards().size());
for (Map.Entry<ShardId, ShardRestoreStatus> shardEntry : entry.shards().entrySet()) {
shardEntry.getKey().writeTo(out);
shardEntry.getValue().writeTo(out);
}
builder.endArray();
builder.startArray("shards");
{
for (Map.Entry<ShardId, ShardRestoreStatus> shardEntry : entry.shards.entrySet()) {
ShardId shardId = shardEntry.getKey();
ShardRestoreStatus status = shardEntry.getValue();
builder.startObject();
{
builder.field("index", shardId.getIndex());
builder.field("shard", shardId.getId());
builder.field("state", status.state());
}
builder.endObject();
}
}
builder.endArray();
builder.endObject();
}
}
/**
* {@inheritDoc}
*/
@Override
public RestoreMetaData fromXContent(XContentParser parser) throws IOException {
throw new UnsupportedOperationException();
}
@Override
public EnumSet<MetaData.XContentContext> context() {
return MetaData.API_ONLY;
}
/**
* {@inheritDoc}
*/
@Override
public XContentBuilder toXContent(XContentBuilder builder, ToXContent.Params params) throws IOException {
builder.startArray("snapshots");
for (Entry entry : entries) {
toXContent(entry, builder, params);
}
builder.endArray();
return builder;
}
/**
* Serializes single restore operation
*
* @param entry restore operation metadata
* @param builder XContent builder
* @param params serialization parameters
* @throws IOException
*/
public void toXContent(Entry entry, XContentBuilder builder, ToXContent.Params params) throws IOException {
builder.startObject();
builder.field("snapshot", entry.snapshotId().getSnapshot());
builder.field("repository", entry.snapshotId().getRepository());
builder.field("state", entry.state());
builder.startArray("indices");
{
for (String index : entry.indices()) {
builder.value(index);
}
}
builder.endArray();
builder.startArray("shards");
{
for (Map.Entry<ShardId, ShardRestoreStatus> shardEntry : entry.shards.entrySet()) {
ShardId shardId = shardEntry.getKey();
ShardRestoreStatus status = shardEntry.getValue();
builder.startObject();
{
builder.field("index", shardId.getIndex());
builder.field("shard", shardId.getId());
builder.field("state", status.state());
}
builder.endObject();
}
}
builder.endArray();
builder.endObject();
}
}
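readFrom and writeTo above define a simple length-prefixed wire format: a count first, then each element in order. A hedged sketch of the same idiom using plain java.io data streams; Elasticsearch itself uses StreamInput/StreamOutput with variable-length ints, and WireFormatSketch with its method names is invented for illustration.

[source,java]
--------------------------------------------------
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class WireFormatSketch {

    static void writeEntries(List<String> indices, DataOutputStream out) throws IOException {
        out.writeInt(indices.size());   // count first (the real code uses writeVInt)
        for (String index : indices) {
            out.writeUTF(index);        // then each element in order
        }
    }

    static List<String> readEntries(DataInputStream in) throws IOException {
        int size = in.readInt();        // read the count back
        List<String> indices = new ArrayList<>(size);
        for (int i = 0; i < size; i++) {
            indices.add(in.readUTF());  // then loop exactly that many times
        }
        return indices;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        writeEntries(Arrays.asList("index-1", "index-2"), new DataOutputStream(bytes));
        List<String> roundTripped = readEntries(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));
        System.out.println(roundTripped); // [index-1, index-2]
    }
}
--------------------------------------------------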

View File

@ -21,6 +21,8 @@ package org.elasticsearch.cluster.metadata;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMap;
import org.elasticsearch.cluster.AbstractDiffable;
import org.elasticsearch.cluster.metadata.MetaData.Custom;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.ToXContent;
@ -30,6 +32,7 @@ import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.index.shard.ShardId;
import java.io.IOException;
import java.util.EnumSet;
import java.util.Map;
import static com.google.common.collect.Maps.newHashMap;
@ -37,10 +40,10 @@ import static com.google.common.collect.Maps.newHashMap;
/**
* Meta data about snapshots that are currently executing
*/
public class SnapshotMetaData implements MetaData.Custom {
public class SnapshotMetaData extends AbstractDiffable<Custom> implements MetaData.Custom {
public static final String TYPE = "snapshots";
public static final Factory FACTORY = new Factory();
public static final SnapshotMetaData PROTO = new SnapshotMetaData();
@Override
public boolean equals(Object o) {
@ -329,123 +332,123 @@ public class SnapshotMetaData implements MetaData.Custom {
return null;
}
@Override
public String type() {
return TYPE;
}
public static class Factory extends MetaData.Custom.Factory<SnapshotMetaData> {
@Override
public String type() {
return TYPE;
}
@Override
public SnapshotMetaData readFrom(StreamInput in) throws IOException {
Entry[] entries = new Entry[in.readVInt()];
for (int i = 0; i < entries.length; i++) {
SnapshotId snapshotId = SnapshotId.readSnapshotId(in);
boolean includeGlobalState = in.readBoolean();
State state = State.fromValue(in.readByte());
int indices = in.readVInt();
ImmutableList.Builder<String> indexBuilder = ImmutableList.builder();
for (int j = 0; j < indices; j++) {
indexBuilder.add(in.readString());
}
long startTime = in.readLong();
ImmutableMap.Builder<ShardId, ShardSnapshotStatus> builder = ImmutableMap.builder();
int shards = in.readVInt();
for (int j = 0; j < shards; j++) {
ShardId shardId = ShardId.readShardId(in);
String nodeId = in.readOptionalString();
State shardState = State.fromValue(in.readByte());
builder.put(shardId, new ShardSnapshotStatus(nodeId, shardState));
}
entries[i] = new Entry(snapshotId, includeGlobalState, state, indexBuilder.build(), startTime, builder.build());
@Override
public SnapshotMetaData readFrom(StreamInput in) throws IOException {
Entry[] entries = new Entry[in.readVInt()];
for (int i = 0; i < entries.length; i++) {
SnapshotId snapshotId = SnapshotId.readSnapshotId(in);
boolean includeGlobalState = in.readBoolean();
State state = State.fromValue(in.readByte());
int indices = in.readVInt();
ImmutableList.Builder<String> indexBuilder = ImmutableList.builder();
for (int j = 0; j < indices; j++) {
indexBuilder.add(in.readString());
}
return new SnapshotMetaData(entries);
}
@Override
public void writeTo(SnapshotMetaData repositories, StreamOutput out) throws IOException {
out.writeVInt(repositories.entries().size());
for (Entry entry : repositories.entries()) {
entry.snapshotId().writeTo(out);
out.writeBoolean(entry.includeGlobalState());
out.writeByte(entry.state().value());
out.writeVInt(entry.indices().size());
for (String index : entry.indices()) {
out.writeString(index);
}
out.writeLong(entry.startTime());
out.writeVInt(entry.shards().size());
for (Map.Entry<ShardId, ShardSnapshotStatus> shardEntry : entry.shards().entrySet()) {
shardEntry.getKey().writeTo(out);
out.writeOptionalString(shardEntry.getValue().nodeId());
out.writeByte(shardEntry.getValue().state().value());
}
long startTime = in.readLong();
ImmutableMap.Builder<ShardId, ShardSnapshotStatus> builder = ImmutableMap.builder();
int shards = in.readVInt();
for (int j = 0; j < shards; j++) {
ShardId shardId = ShardId.readShardId(in);
String nodeId = in.readOptionalString();
State shardState = State.fromValue(in.readByte());
builder.put(shardId, new ShardSnapshotStatus(nodeId, shardState));
}
entries[i] = new Entry(snapshotId, includeGlobalState, state, indexBuilder.build(), startTime, builder.build());
}
return new SnapshotMetaData(entries);
}
@Override
public SnapshotMetaData fromXContent(XContentParser parser) throws IOException {
throw new UnsupportedOperationException();
}
static final class Fields {
static final XContentBuilderString REPOSITORY = new XContentBuilderString("repository");
static final XContentBuilderString SNAPSHOTS = new XContentBuilderString("snapshots");
static final XContentBuilderString SNAPSHOT = new XContentBuilderString("snapshot");
static final XContentBuilderString INCLUDE_GLOBAL_STATE = new XContentBuilderString("include_global_state");
static final XContentBuilderString STATE = new XContentBuilderString("state");
static final XContentBuilderString INDICES = new XContentBuilderString("indices");
static final XContentBuilderString START_TIME_MILLIS = new XContentBuilderString("start_time_millis");
static final XContentBuilderString START_TIME = new XContentBuilderString("start_time");
static final XContentBuilderString SHARDS = new XContentBuilderString("shards");
static final XContentBuilderString INDEX = new XContentBuilderString("index");
static final XContentBuilderString SHARD = new XContentBuilderString("shard");
static final XContentBuilderString NODE = new XContentBuilderString("node");
}
@Override
public void toXContent(SnapshotMetaData customIndexMetaData, XContentBuilder builder, ToXContent.Params params) throws IOException {
builder.startArray(Fields.SNAPSHOTS);
for (Entry entry : customIndexMetaData.entries()) {
toXContent(entry, builder, params);
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeVInt(entries.size());
for (Entry entry : entries) {
entry.snapshotId().writeTo(out);
out.writeBoolean(entry.includeGlobalState());
out.writeByte(entry.state().value());
out.writeVInt(entry.indices().size());
for (String index : entry.indices()) {
out.writeString(index);
}
builder.endArray();
}
public void toXContent(Entry entry, XContentBuilder builder, ToXContent.Params params) throws IOException {
builder.startObject();
builder.field(Fields.REPOSITORY, entry.snapshotId().getRepository());
builder.field(Fields.SNAPSHOT, entry.snapshotId().getSnapshot());
builder.field(Fields.INCLUDE_GLOBAL_STATE, entry.includeGlobalState());
builder.field(Fields.STATE, entry.state());
builder.startArray(Fields.INDICES);
{
for (String index : entry.indices()) {
builder.value(index);
}
out.writeLong(entry.startTime());
out.writeVInt(entry.shards().size());
for (Map.Entry<ShardId, ShardSnapshotStatus> shardEntry : entry.shards().entrySet()) {
shardEntry.getKey().writeTo(out);
out.writeOptionalString(shardEntry.getValue().nodeId());
out.writeByte(shardEntry.getValue().state().value());
}
builder.endArray();
builder.timeValueField(Fields.START_TIME_MILLIS, Fields.START_TIME, entry.startTime());
builder.startArray(Fields.SHARDS);
{
for (Map.Entry<ShardId, ShardSnapshotStatus> shardEntry : entry.shards.entrySet()) {
ShardId shardId = shardEntry.getKey();
ShardSnapshotStatus status = shardEntry.getValue();
builder.startObject();
{
builder.field(Fields.INDEX, shardId.getIndex());
builder.field(Fields.SHARD, shardId.getId());
builder.field(Fields.STATE, status.state());
builder.field(Fields.NODE, status.nodeId());
}
builder.endObject();
}
}
builder.endArray();
builder.endObject();
}
}
@Override
public SnapshotMetaData fromXContent(XContentParser parser) throws IOException {
throw new UnsupportedOperationException();
}
@Override
public EnumSet<MetaData.XContentContext> context() {
return MetaData.API_ONLY;
}
static final class Fields {
static final XContentBuilderString REPOSITORY = new XContentBuilderString("repository");
static final XContentBuilderString SNAPSHOTS = new XContentBuilderString("snapshots");
static final XContentBuilderString SNAPSHOT = new XContentBuilderString("snapshot");
static final XContentBuilderString INCLUDE_GLOBAL_STATE = new XContentBuilderString("include_global_state");
static final XContentBuilderString STATE = new XContentBuilderString("state");
static final XContentBuilderString INDICES = new XContentBuilderString("indices");
static final XContentBuilderString START_TIME_MILLIS = new XContentBuilderString("start_time_millis");
static final XContentBuilderString START_TIME = new XContentBuilderString("start_time");
static final XContentBuilderString SHARDS = new XContentBuilderString("shards");
static final XContentBuilderString INDEX = new XContentBuilderString("index");
static final XContentBuilderString SHARD = new XContentBuilderString("shard");
static final XContentBuilderString NODE = new XContentBuilderString("node");
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, ToXContent.Params params) throws IOException {
builder.startArray(Fields.SNAPSHOTS);
for (Entry entry : entries) {
toXContent(entry, builder, params);
}
builder.endArray();
return builder;
}
public void toXContent(Entry entry, XContentBuilder builder, ToXContent.Params params) throws IOException {
builder.startObject();
builder.field(Fields.REPOSITORY, entry.snapshotId().getRepository());
builder.field(Fields.SNAPSHOT, entry.snapshotId().getSnapshot());
builder.field(Fields.INCLUDE_GLOBAL_STATE, entry.includeGlobalState());
builder.field(Fields.STATE, entry.state());
builder.startArray(Fields.INDICES);
{
for (String index : entry.indices()) {
builder.value(index);
}
}
builder.endArray();
builder.timeValueField(Fields.START_TIME_MILLIS, Fields.START_TIME, entry.startTime());
builder.startArray(Fields.SHARDS);
{
for (Map.Entry<ShardId, ShardSnapshotStatus> shardEntry : entry.shards.entrySet()) {
ShardId shardId = shardEntry.getKey();
ShardSnapshotStatus status = shardEntry.getValue();
builder.startObject();
{
builder.field(Fields.INDEX, shardId.getIndex());
builder.field(Fields.SHARD, shardId.getId());
builder.field(Fields.STATE, status.state());
builder.field(Fields.NODE, status.nodeId());
}
builder.endObject();
}
}
builder.endArray();
builder.endObject();
}
}
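toXContent above renders every in-flight snapshot into a "snapshots" array using the Fields constants. The sketch below produces the same shape with Jackson's streaming generator; the concrete values (repository name, snapshot name, timestamp) are invented for illustration.

[source,java]
--------------------------------------------------
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonGenerator;

import java.io.IOException;
import java.io.StringWriter;

public class SnapshotXContentSketch {
    public static void main(String[] args) throws IOException {
        StringWriter out = new StringWriter();
        try (JsonGenerator gen = new JsonFactory().createGenerator(out)) {
            gen.writeStartObject();
            gen.writeArrayFieldStart("snapshots");              // Fields.SNAPSHOTS
            gen.writeStartObject();
            gen.writeStringField("repository", "backups");      // Fields.REPOSITORY
            gen.writeStringField("snapshot", "nightly-1");      // Fields.SNAPSHOT
            gen.writeBooleanField("include_global_state", true);
            gen.writeStringField("state", "STARTED");
            gen.writeArrayFieldStart("indices");                // Fields.INDICES
            gen.writeString("index-1");
            gen.writeEndArray();
            gen.writeNumberField("start_time_millis", 1430000000000L);
            gen.writeEndObject();
            gen.writeEndArray();
            gen.writeEndObject();
        }
        System.out.println(out); // {"snapshots":[{"repository":"backups",...}]}
    }
}
--------------------------------------------------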

View File

@ -25,6 +25,7 @@ import com.carrotsearch.hppc.cursors.ObjectObjectCursor;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.UnmodifiableIterator;
import org.elasticsearch.Version;
import org.elasticsearch.cluster.AbstractDiffable;
import org.elasticsearch.common.Booleans;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.collect.ImmutableOpenMap;
@ -44,9 +45,10 @@ import static com.google.common.collect.Lists.newArrayList;
* This class holds all {@link DiscoveryNode} in the cluster and provides convenience methods to
* access, modify merge / diff discovery nodes.
*/
public class DiscoveryNodes implements Iterable<DiscoveryNode> {
public class DiscoveryNodes extends AbstractDiffable<DiscoveryNodes> implements Iterable<DiscoveryNode> {
public static final DiscoveryNodes EMPTY_NODES = builder().build();
public static final DiscoveryNodes PROTO = EMPTY_NODES;
private final ImmutableOpenMap<String, DiscoveryNode> nodes;
private final ImmutableOpenMap<String, DiscoveryNode> dataNodes;
@ -567,6 +569,44 @@ public class DiscoveryNodes implements Iterable<DiscoveryNode> {
}
}
public void writeTo(StreamOutput out) throws IOException {
if (masterNodeId == null) {
out.writeBoolean(false);
} else {
out.writeBoolean(true);
out.writeString(masterNodeId);
}
out.writeVInt(nodes.size());
for (DiscoveryNode node : this) {
node.writeTo(out);
}
}
public DiscoveryNodes readFrom(StreamInput in, DiscoveryNode localNode) throws IOException {
Builder builder = new Builder();
if (in.readBoolean()) {
builder.masterNodeId(in.readString());
}
if (localNode != null) {
builder.localNodeId(localNode.id());
}
int size = in.readVInt();
for (int i = 0; i < size; i++) {
DiscoveryNode node = DiscoveryNode.readNode(in);
if (localNode != null && node.id().equals(localNode.id())) {
// reuse the same instance of our address and local node id for faster equality
node = localNode;
}
builder.put(node);
}
return builder.build();
}
@Override
public DiscoveryNodes readFrom(StreamInput in) throws IOException {
return readFrom(in, localNode());
}
public static Builder builder() {
return new Builder();
}
@ -631,37 +671,8 @@ public class DiscoveryNodes implements Iterable<DiscoveryNode> {
return new DiscoveryNodes(nodes.build(), dataNodesBuilder.build(), masterNodesBuilder.build(), masterNodeId, localNodeId, minNodeVersion, minNonClientNodeVersion);
}
public static void writeTo(DiscoveryNodes nodes, StreamOutput out) throws IOException {
if (nodes.masterNodeId() == null) {
out.writeBoolean(false);
} else {
out.writeBoolean(true);
out.writeString(nodes.masterNodeId);
}
out.writeVInt(nodes.size());
for (DiscoveryNode node : nodes) {
node.writeTo(out);
}
}
public static DiscoveryNodes readFrom(StreamInput in, @Nullable DiscoveryNode localNode) throws IOException {
Builder builder = new Builder();
if (in.readBoolean()) {
builder.masterNodeId(in.readString());
}
if (localNode != null) {
builder.localNodeId(localNode.id());
}
int size = in.readVInt();
for (int i = 0; i < size; i++) {
DiscoveryNode node = DiscoveryNode.readNode(in);
if (localNode != null && node.id().equals(localNode.id())) {
// reuse the same instance of our address and local node id for faster equality
node = localNode;
}
builder.put(node);
}
return builder.build();
return PROTO.readFrom(in, localNode);
}
}
}
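The readFrom(StreamInput, DiscoveryNode) variant swaps the freshly deserialized copy of the local node for the long-lived localNode instance, so later comparisons can rely on reference equality. A small sketch of that idiom, with a stand-in Node type:

[source,java]
--------------------------------------------------
import java.util.Arrays;
import java.util.List;

// Sketch of the instance-reuse idiom in DiscoveryNodes.readFrom above: after
// deserializing, replace the copy that describes "us" with the long-lived
// local instance so identity checks (==) stay cheap. Types are illustrative.
public class LocalNodeReuseSketch {

    static final class Node {
        final String id;
        Node(String id) { this.id = id; }
    }

    static Node canonicalize(Node deserialized, Node localNode) {
        if (localNode != null && deserialized.id.equals(localNode.id)) {
            return localNode; // reuse the same instance for faster equality
        }
        return deserialized;
    }

    public static void main(String[] args) {
        Node local = new Node("node-1");
        List<Node> fromWire = Arrays.asList(new Node("node-1"), new Node("node-2"));
        for (Node n : fromWire) {
            Node node = canonicalize(n, local);
            System.out.println(node.id + " is the local instance: " + (node == local));
        }
    }
}
--------------------------------------------------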

View File

@ -25,6 +25,7 @@ import com.carrotsearch.hppc.cursors.IntObjectCursor;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.Sets;
import com.google.common.collect.UnmodifiableIterator;
import org.elasticsearch.cluster.AbstractDiffable;
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.common.collect.ImmutableOpenIntMap;
@ -55,7 +56,9 @@ import static com.google.common.collect.Lists.newArrayList;
* represented as {@link ShardRouting}.
* </p>
*/
public class IndexRoutingTable implements Iterable<IndexShardRoutingTable> {
public class IndexRoutingTable extends AbstractDiffable<IndexRoutingTable> implements Iterable<IndexShardRoutingTable> {
public static final IndexRoutingTable PROTO = builder("").build();
private final String index;
private final ShardShuffler shuffler;
@ -314,9 +317,51 @@ public class IndexRoutingTable implements Iterable<IndexShardRoutingTable> {
return new GroupShardsIterator(set);
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
IndexRoutingTable that = (IndexRoutingTable) o;
if (!index.equals(that.index)) return false;
if (!shards.equals(that.shards)) return false;
return true;
}
@Override
public int hashCode() {
int result = index.hashCode();
result = 31 * result + shards.hashCode();
return result;
}
public void validate() throws RoutingValidationException {
}
@Override
public IndexRoutingTable readFrom(StreamInput in) throws IOException {
String index = in.readString();
Builder builder = new Builder(index);
int size = in.readVInt();
for (int i = 0; i < size; i++) {
builder.addIndexShard(IndexShardRoutingTable.Builder.readFromThin(in, index));
}
return builder.build();
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeString(index);
out.writeVInt(shards.size());
for (IndexShardRoutingTable indexShard : this) {
IndexShardRoutingTable.Builder.writeToThin(indexShard, out);
}
}
public static Builder builder(String index) {
return new Builder(index);
}
@ -338,30 +383,7 @@ public class IndexRoutingTable implements Iterable<IndexShardRoutingTable> {
* @throws IOException if something happens during read
*/
public static IndexRoutingTable readFrom(StreamInput in) throws IOException {
String index = in.readString();
Builder builder = new Builder(index);
int size = in.readVInt();
for (int i = 0; i < size; i++) {
builder.addIndexShard(IndexShardRoutingTable.Builder.readFromThin(in, index));
}
return builder.build();
}
/**
* Writes an {@link IndexRoutingTable} to a {@link StreamOutput}.
*
* @param index {@link IndexRoutingTable} to write
* @param out {@link StreamOutput} to write to
* @throws IOException if something happens during write
*/
public static void writeTo(IndexRoutingTable index, StreamOutput out) throws IOException {
out.writeString(index.index());
out.writeVInt(index.shards.size());
for (IndexShardRoutingTable indexShard : index) {
IndexShardRoutingTable.Builder.writeToThin(indexShard, out);
}
return PROTO.readFrom(in);
}
/**

View File

@ -347,6 +347,28 @@ public class IndexShardRoutingTable implements Iterable<ShardRouting> {
return new PlainShardIterator(shardId, ordered);
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
IndexShardRoutingTable that = (IndexShardRoutingTable) o;
if (primaryAllocatedPostApi != that.primaryAllocatedPostApi) return false;
if (!shardId.equals(that.shardId)) return false;
if (!shards.equals(that.shards)) return false;
return true;
}
@Override
public int hashCode() {
int result = shardId.hashCode();
result = 31 * result + shards.hashCode();
result = 31 * result + (primaryAllocatedPostApi ? 1 : 0);
return result;
}
/**
* Returns <code>true</code> iff all shards in the routing table are started, otherwise <code>false</code>
*/

View File

@ -21,7 +21,7 @@ package org.elasticsearch.cluster.routing;
import com.carrotsearch.hppc.IntSet;
import com.google.common.collect.*;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.*;
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.common.io.stream.StreamInput;
@ -44,7 +44,9 @@ import static com.google.common.collect.Maps.newHashMap;
*
* @see IndexRoutingTable
*/
public class RoutingTable implements Iterable<IndexRoutingTable> {
public class RoutingTable implements Iterable<IndexRoutingTable>, Diffable<RoutingTable> {
public static RoutingTable PROTO = builder().build();
public static final RoutingTable EMPTY_ROUTING_TABLE = builder().build();
@ -254,6 +256,66 @@ public class RoutingTable implements Iterable<IndexRoutingTable> {
return new GroupShardsIterator(set);
}
@Override
public Diff<RoutingTable> diff(RoutingTable previousState) {
return new RoutingTableDiff(previousState, this);
}
@Override
public Diff<RoutingTable> readDiffFrom(StreamInput in) throws IOException {
return new RoutingTableDiff(in);
}
@Override
public RoutingTable readFrom(StreamInput in) throws IOException {
Builder builder = new Builder();
builder.version = in.readLong();
int size = in.readVInt();
for (int i = 0; i < size; i++) {
IndexRoutingTable index = IndexRoutingTable.Builder.readFrom(in);
builder.add(index);
}
return builder.build();
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeLong(version);
out.writeVInt(indicesRouting.size());
for (IndexRoutingTable index : indicesRouting.values()) {
index.writeTo(out);
}
}
private static class RoutingTableDiff implements Diff<RoutingTable> {
private final long version;
private final Diff<ImmutableMap<String, IndexRoutingTable>> indicesRouting;
public RoutingTableDiff(RoutingTable before, RoutingTable after) {
version = after.version;
indicesRouting = DiffableUtils.diff(before.indicesRouting, after.indicesRouting);
}
public RoutingTableDiff(StreamInput in) throws IOException {
version = in.readLong();
indicesRouting = DiffableUtils.readImmutableMapDiff(in, IndexRoutingTable.PROTO);
}
@Override
public RoutingTable apply(RoutingTable part) {
return new RoutingTable(version, indicesRouting.apply(part.indicesRouting));
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeLong(version);
indicesRouting.writeTo(out);
}
}
public static Builder builder() {
return new Builder();
}
@ -403,6 +465,11 @@ public class RoutingTable implements Iterable<IndexRoutingTable> {
return this;
}
public Builder indicesRouting(ImmutableMap<String, IndexRoutingTable> indicesRouting) {
this.indicesRouting.putAll(indicesRouting);
return this;
}
public Builder remove(String index) {
indicesRouting.remove(index);
return this;
@ -422,23 +489,7 @@ public class RoutingTable implements Iterable<IndexRoutingTable> {
}
public static RoutingTable readFrom(StreamInput in) throws IOException {
Builder builder = new Builder();
builder.version = in.readLong();
int size = in.readVInt();
for (int i = 0; i < size; i++) {
IndexRoutingTable index = IndexRoutingTable.Builder.readFrom(in);
builder.add(index);
}
return builder.build();
}
public static void writeTo(RoutingTable table, StreamOutput out) throws IOException {
out.writeLong(table.version);
out.writeVInt(table.indicesRouting.size());
for (IndexRoutingTable index : table.indicesRouting.values()) {
IndexRoutingTable.Builder.writeTo(index, out);
}
return PROTO.readFrom(in);
}
}
@ -450,5 +501,4 @@ public class RoutingTable implements Iterable<IndexRoutingTable> {
return sb.toString();
}
}
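RoutingTableDiff above delegates the per-index work to DiffableUtils.diff over the indicesRouting map. The essence is a map diff: record deletions and changed or added entries between two maps, then apply them to a copy of the old map to reconstruct the new one. A stripped-down sketch follows; MapDiffSketch is illustrative, and the real implementation also serializes the diff over the wire.

[source,java]
--------------------------------------------------
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

public class MapDiffSketch {

    final Set<String> deletes = new LinkedHashSet<>();
    final Map<String, Integer> upserts = new LinkedHashMap<>();

    static MapDiffSketch diff(Map<String, Integer> before, Map<String, Integer> after) {
        MapDiffSketch d = new MapDiffSketch();
        for (String key : before.keySet()) {
            if (!after.containsKey(key)) {
                d.deletes.add(key);                              // present before, gone now
            }
        }
        for (Map.Entry<String, Integer> entry : after.entrySet()) {
            Integer old = before.get(entry.getKey());
            if (old == null || !old.equals(entry.getValue())) {
                d.upserts.put(entry.getKey(), entry.getValue()); // new or changed entry
            }
        }
        return d;
    }

    Map<String, Integer> apply(Map<String, Integer> base) {
        Map<String, Integer> result = new LinkedHashMap<>(base);
        result.keySet().removeAll(deletes);
        result.putAll(upserts);
        return result;
    }

    public static void main(String[] args) {
        Map<String, Integer> before = new LinkedHashMap<>();
        before.put("index-1", 1);
        before.put("index-2", 1);
        Map<String, Integer> after = new LinkedHashMap<>();
        after.put("index-1", 2);
        after.put("index-3", 1);
        MapDiffSketch diff = diff(before, after);
        System.out.println(diff.apply(before).equals(after)); // true
    }
}
--------------------------------------------------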

View File

@ -142,19 +142,19 @@ public class DiskThresholdDecider extends AllocationDecider {
private void warnAboutDiskIfNeeded(DiskUsage usage) {
// Check absolute disk values
if (usage.getFreeBytes() < DiskThresholdDecider.this.freeBytesThresholdHigh.bytes()) {
logger.warn("high disk watermark [{}] exceeded on {}, shards will be relocated away from this node",
logger.warn("high disk watermark [{} free] exceeded on {}, shards will be relocated away from this node",
DiskThresholdDecider.this.freeBytesThresholdHigh, usage);
} else if (usage.getFreeBytes() < DiskThresholdDecider.this.freeBytesThresholdLow.bytes()) {
logger.info("low disk watermark [{}] exceeded on {}, replicas will not be assigned to this node",
logger.info("low disk watermark [{} free] exceeded on {}, replicas will not be assigned to this node",
DiskThresholdDecider.this.freeBytesThresholdLow, usage);
}
// Check percentage disk values
if (usage.getFreeDiskAsPercentage() < DiskThresholdDecider.this.freeDiskThresholdHigh) {
logger.warn("high disk watermark [{}] exceeded on {}, shards will be relocated away from this node",
logger.warn("high disk watermark [{} free] exceeded on {}, shards will be relocated away from this node",
Strings.format1Decimals(DiskThresholdDecider.this.freeDiskThresholdHigh, "%"), usage);
} else if (usage.getFreeDiskAsPercentage() < DiskThresholdDecider.this.freeDiskThresholdLow) {
logger.info("low disk watermark [{}] exceeded on {}, replicas will not be assigned to this node",
logger.info("low disk watermark [{} free] exceeded on {}, replicas will not be assigned to this node",
Strings.format1Decimals(DiskThresholdDecider.this.freeDiskThresholdLow, "%"), usage);
}
}

View File

@ -401,7 +401,7 @@ public class InternalClusterService extends AbstractLifecycleComponent<ClusterSe
Discovery.AckListener ackListener = new NoOpAckListener();
if (newClusterState.nodes().localNodeMaster()) {
// only the master controls the version numbers
Builder builder = ClusterState.builder(newClusterState).version(newClusterState.version() + 1);
Builder builder = ClusterState.builder(newClusterState).incrementVersion();
if (previousClusterState.routingTable() != newClusterState.routingTable()) {
builder.routingTable(RoutingTable.builder(newClusterState.routingTable()).version(newClusterState.routingTable().version() + 1));
}
@ -466,7 +466,7 @@ public class InternalClusterService extends AbstractLifecycleComponent<ClusterSe
// we don't want to notify
if (newClusterState.nodes().localNodeMaster()) {
logger.debug("publishing cluster state version {}", newClusterState.version());
discoveryService.publish(newClusterState, ackListener);
discoveryService.publish(clusterChangedEvent, ackListener);
}
// update the current cluster state
@ -511,9 +511,9 @@ public class InternalClusterService extends AbstractLifecycleComponent<ClusterSe
((ProcessedClusterStateUpdateTask) updateTask).clusterStateProcessed(source, previousClusterState, newClusterState);
}
logger.debug("processing [{}]: done applying updated cluster_state (version: {})", source, newClusterState.version());
logger.debug("processing [{}]: done applying updated cluster_state (version: {}, uuid: {})", source, newClusterState.version(), newClusterState.uuid());
} catch (Throwable t) {
StringBuilder sb = new StringBuilder("failed to apply updated cluster state:\nversion [").append(newClusterState.version()).append("], source [").append(source).append("]\n");
StringBuilder sb = new StringBuilder("failed to apply updated cluster state:\nversion [").append(newClusterState.version()).append("], uuid [").append(newClusterState.uuid()).append("], source [").append(source).append("]\n");
sb.append(newClusterState.nodes().prettyPrint());
sb.append(newClusterState.routingTable().prettyPrint());
sb.append(newClusterState.readOnlyRoutingNodes().prettyPrint());

View File

@ -95,6 +95,7 @@ public class ClusterDynamicSettingsModule extends AbstractModule {
clusterDynamicSettings.addDynamicSetting(SnapshotInProgressAllocationDecider.CLUSTER_ROUTING_ALLOCATION_SNAPSHOT_RELOCATION_ENABLED);
clusterDynamicSettings.addDynamicSetting(DestructiveOperations.REQUIRES_NAME);
clusterDynamicSettings.addDynamicSetting(DiscoverySettings.PUBLISH_TIMEOUT, Validator.TIME_NON_NEGATIVE);
clusterDynamicSettings.addDynamicSetting(DiscoverySettings.PUBLISH_DIFF_ENABLE, Validator.BOOLEAN);
clusterDynamicSettings.addDynamicSetting(HierarchyCircuitBreakerService.TOTAL_CIRCUIT_BREAKER_LIMIT_SETTING, Validator.MEMORY_SIZE);
clusterDynamicSettings.addDynamicSetting(HierarchyCircuitBreakerService.FIELDDATA_CIRCUIT_BREAKER_LIMIT_SETTING, Validator.MEMORY_SIZE);
clusterDynamicSettings.addDynamicSetting(HierarchyCircuitBreakerService.FIELDDATA_CIRCUIT_BREAKER_OVERHEAD_SETTING, Validator.NON_NEGATIVE_DOUBLE);

View File

@ -0,0 +1,30 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.common.io.stream;
import java.io.IOException;
public interface StreamableReader<T> {
/**
* Reads a copy of an object of the same type from the stream input.
*
* The caller object remains unchanged.
*/
T readFrom(StreamInput in) throws IOException;
}

View File

@ -0,0 +1,30 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.common.io.stream;
import java.io.IOException;
public interface Writeable<T> extends StreamableReader<T> {
/**
* Writes the current object to the output stream {@code out}.
*/
void writeTo(StreamOutput out) throws IOException;
}
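Together with StreamableReader, this interface supports the prototype pattern used throughout this change: a static PROTO instance deserializes copies of itself via readFrom without being mutated. A minimal sketch, assuming Elasticsearch's StreamInput/StreamOutput; NodeName is an invented example type.

[source,java]
--------------------------------------------------
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;

import java.io.IOException;

// Minimal Writeable implementation. The PROTO constant mirrors the prototype
// pattern above: callers deserialize via PROTO.readFrom(in), which leaves the
// prototype itself unchanged and returns a fresh copy.
public class NodeName implements Writeable<NodeName> {

    public static final NodeName PROTO = new NodeName("");

    private final String name;

    public NodeName(String name) {
        this.name = name;
    }

    @Override
    public void writeTo(StreamOutput out) throws IOException {
        out.writeString(name);
    }

    @Override
    public NodeName readFrom(StreamInput in) throws IOException {
        return new NodeName(in.readString());
    }
}
--------------------------------------------------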

View File

@ -38,6 +38,10 @@ public class InetSocketTransportAddress implements TransportAddress {
InetSocketTransportAddress.resolveAddress = resolveAddress;
}
public static boolean getResolveAddress() {
return resolveAddress;
}
private InetSocketAddress address;
InetSocketTransportAddress() {

View File

@ -0,0 +1,37 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.common.xcontent;
/**
* Simple data structure representing the line and column number of a position
* in some XContent e.g. JSON. Locations are typically used to communicate the
* position of a parsing error to end users and consequently have line and
* column numbers starting from 1.
*/
public class XContentLocation {
public final int lineNumber;
public final int columnNumber;
public XContentLocation(int lineNumber, int columnNumber) {
super();
this.lineNumber = lineNumber;
this.columnNumber = columnNumber;
}
}
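Typical use is to prefix a parse error with the last token's position; `describe` and `LocationSketch` below are invented names for illustration.

[source,java]
--------------------------------------------------
import org.elasticsearch.common.xcontent.XContentLocation;

// Hypothetical helper showing the intended use of XContentLocation: attach the
// last token's 1-based line/column to a parse error message.
public class LocationSketch {

    static String describe(String error, XContentLocation location) {
        if (location == null) {
            return error; // the location may be unknown (getTokenLocation() can return null)
        }
        return error + " at line " + location.lineNumber + ", column " + location.columnNumber;
    }

    public static void main(String[] args) {
        System.out.println(describe("failed to parse repository", new XContentLocation(3, 17)));
        // failed to parse repository at line 3, column 17
    }
}
--------------------------------------------------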

View File

@ -241,4 +241,12 @@ public interface XContentParser extends Releasable {
*
*/
byte[] binaryValue() throws IOException;
/**
* Used for error reporting to highlight where syntax errors occur in
* content being parsed.
*
* @return the last token's location, or null if it cannot be determined
*/
XContentLocation getTokenLocation();
}

View File

@ -19,10 +19,13 @@
package org.elasticsearch.common.xcontent.json;
import com.fasterxml.jackson.core.JsonLocation;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.IOUtils;
import org.elasticsearch.common.xcontent.XContentLocation;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.common.xcontent.support.AbstractXContentParser;
@ -187,6 +190,15 @@ public class JsonXContentParser extends AbstractXContentParser {
return parser.getBinaryValue();
}
@Override
public XContentLocation getTokenLocation() {
JsonLocation loc = parser.getTokenLocation();
if (loc == null) {
return null;
}
return new XContentLocation(loc.getLineNr(), loc.getColumnNr());
}
@Override
public void close() {
IOUtils.closeWhileHandlingException(parser);

View File

@ -19,6 +19,7 @@
package org.elasticsearch.discovery;
import org.elasticsearch.cluster.ClusterChangedEvent;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.cluster.routing.allocation.AllocationService;
@ -59,7 +60,7 @@ public interface Discovery extends LifecycleComponent<Discovery> {
* The {@link AckListener} allows to keep track of the ack received from nodes, and verify whether
* they updated their own cluster state or not.
*/
void publish(ClusterState clusterState, AckListener ackListener);
void publish(ClusterChangedEvent clusterChangedEvent, AckListener ackListener);
public static interface AckListener {
void onNodeAck(DiscoveryNode node, @Nullable Throwable t);

View File

@ -21,6 +21,7 @@ package org.elasticsearch.discovery;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.ElasticsearchTimeoutException;
import org.elasticsearch.cluster.ClusterChangedEvent;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.block.ClusterBlock;
import org.elasticsearch.cluster.node.DiscoveryNode;
@ -132,9 +133,9 @@ public class DiscoveryService extends AbstractLifecycleComponent<DiscoveryServic
* The {@link org.elasticsearch.discovery.Discovery.AckListener} allows to acknowledge the publish
* event based on the response gotten from all nodes
*/
public void publish(ClusterState clusterState, Discovery.AckListener ackListener) {
public void publish(ClusterChangedEvent clusterChangedEvent, Discovery.AckListener ackListener) {
if (lifecycle.started()) {
discovery.publish(clusterState, ackListener);
discovery.publish(clusterChangedEvent, ackListener);
}
}
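Passing the full ClusterChangedEvent instead of just the new ClusterState threads the previous state through to the publisher, which needs it to compute a diff. A sketch of the decision this enables, with stand-in types:

[source,java]
--------------------------------------------------
// Why publish() now takes the changed event rather than just the new state:
// the publisher needs the previous state to compute a diff. State and
// ChangedEvent are simplified stand-ins for the real classes.
public class PublishDecisionSketch {

    static final class State {
        final long version;
        State(long version) { this.version = version; }
    }

    static final class ChangedEvent {
        final State previous;
        final State current;
        ChangedEvent(State previous, State current) {
            this.previous = previous;
            this.current = current;
        }
    }

    static String publish(ChangedEvent event, boolean diffsEnabled, boolean receiverHasPrevious) {
        if (diffsEnabled && event.previous != null && receiverHasPrevious) {
            return "send diff " + event.previous.version + " -> " + event.current.version;
        }
        return "send full state " + event.current.version; // fallback stays available
    }

    public static void main(String[] args) {
        ChangedEvent event = new ChangedEvent(new State(41), new State(42));
        System.out.println(publish(event, true, true));  // send diff 41 -> 42
        System.out.println(publish(event, true, false)); // send full state 42
    }
}
--------------------------------------------------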

View File

@ -37,16 +37,19 @@ public class DiscoverySettings extends AbstractComponent {
public static final String PUBLISH_TIMEOUT = "discovery.zen.publish_timeout";
public static final String NO_MASTER_BLOCK = "discovery.zen.no_master_block";
public static final String PUBLISH_DIFF_ENABLE = "discovery.zen.publish_diff.enable";
public static final TimeValue DEFAULT_PUBLISH_TIMEOUT = TimeValue.timeValueSeconds(30);
public static final String DEFAULT_NO_MASTER_BLOCK = "write";
public final static int NO_MASTER_BLOCK_ID = 2;
public final static boolean DEFAULT_PUBLISH_DIFF_ENABLE = true;
public final static ClusterBlock NO_MASTER_BLOCK_ALL = new ClusterBlock(NO_MASTER_BLOCK_ID, "no master", true, true, RestStatus.SERVICE_UNAVAILABLE, ClusterBlockLevel.ALL);
public final static ClusterBlock NO_MASTER_BLOCK_WRITES = new ClusterBlock(NO_MASTER_BLOCK_ID, "no master", true, false, RestStatus.SERVICE_UNAVAILABLE, EnumSet.of(ClusterBlockLevel.WRITE, ClusterBlockLevel.METADATA_WRITE));
private volatile ClusterBlock noMasterBlock;
private volatile TimeValue publishTimeout = DEFAULT_PUBLISH_TIMEOUT;
private volatile boolean publishDiff = DEFAULT_PUBLISH_DIFF_ENABLE;
@Inject
public DiscoverySettings(Settings settings, NodeSettingsService nodeSettingsService) {
@ -54,6 +57,7 @@ public class DiscoverySettings extends AbstractComponent {
nodeSettingsService.addListener(new ApplySettings());
this.noMasterBlock = parseNoMasterBlock(settings.get(NO_MASTER_BLOCK, DEFAULT_NO_MASTER_BLOCK));
this.publishTimeout = settings.getAsTime(PUBLISH_TIMEOUT, publishTimeout);
this.publishDiff = settings.getAsBoolean(PUBLISH_DIFF_ENABLE, DEFAULT_PUBLISH_DIFF_ENABLE);
}
/**
@ -67,6 +71,8 @@ public class DiscoverySettings extends AbstractComponent {
return noMasterBlock;
}
public boolean getPublishDiff() { return publishDiff;}
private class ApplySettings implements NodeSettingsService.Listener {
@Override
public void onRefreshSettings(Settings settings) {
@ -84,6 +90,13 @@ public class DiscoverySettings extends AbstractComponent {
noMasterBlock = newNoMasterBlock;
}
}
Boolean newPublishDiff = settings.getAsBoolean(PUBLISH_DIFF_ENABLE, null);
if (newPublishDiff != null) {
if (newPublishDiff != publishDiff) {
logger.info("updating [{}] from [{}] to [{}]", PUBLISH_DIFF_ENABLE, publishDiff, newPublishDiff);
publishDiff = newPublishDiff;
}
}
}
}
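The ApplySettings listener above follows the standard dynamic-setting idiom: readers see a volatile field, and the listener swaps it when the cluster-wide value changes. A self-contained sketch of that pattern; DynamicSettingSketch and the plain-Map listener signature are simplifications.

[source,java]
--------------------------------------------------
import java.util.HashMap;
import java.util.Map;

public class DynamicSettingSketch {

    private volatile boolean publishDiff = true; // DEFAULT_PUBLISH_DIFF_ENABLE

    public boolean getPublishDiff() {
        return publishDiff;
    }

    // Called by the settings infrastructure with the freshly merged settings.
    public void onRefreshSettings(Map<String, String> settings) {
        String raw = settings.get("discovery.zen.publish_diff.enable");
        if (raw != null) {
            boolean newValue = Boolean.parseBoolean(raw);
            if (newValue != publishDiff) {
                System.out.println("updating publish_diff from " + publishDiff + " to " + newValue);
                publishDiff = newValue; // readers observe the change immediately
            }
        }
    }

    public static void main(String[] args) {
        DynamicSettingSketch sketch = new DynamicSettingSketch();
        Map<String, String> update = new HashMap<>();
        update.put("discovery.zen.publish_diff.enable", "false");
        sketch.onRefreshSettings(update);
        System.out.println(sketch.getPublishDiff()); // false
    }
}
--------------------------------------------------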

View File

@ -32,6 +32,8 @@ import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;
import org.elasticsearch.common.component.AbstractLifecycleComponent;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.inject.internal.Nullable;
import org.elasticsearch.common.io.stream.BytesStreamInput;
import org.elasticsearch.common.io.stream.BytesStreamOutput;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.util.concurrent.ConcurrentCollections;
@ -45,6 +47,8 @@ import java.util.Set;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
import static com.google.common.collect.Sets.newHashSet;
import static org.elasticsearch.cluster.ClusterState.Builder;
@ -75,6 +79,8 @@ public class LocalDiscovery extends AbstractLifecycleComponent<Discovery> implem
private static final ConcurrentMap<ClusterName, ClusterGroup> clusterGroups = ConcurrentCollections.newConcurrentMap();
private volatile ClusterState lastProcessedClusterState;
@Inject
public LocalDiscovery(Settings settings, ClusterName clusterName, TransportService transportService, ClusterService clusterService,
DiscoveryNodeService discoveryNodeService, Version version, DiscoverySettings discoverySettings) {
@ -273,7 +279,7 @@ public class LocalDiscovery extends AbstractLifecycleComponent<Discovery> implem
}
@Override
public void publish(ClusterState clusterState, final Discovery.AckListener ackListener) {
public void publish(ClusterChangedEvent clusterChangedEvent, final Discovery.AckListener ackListener) {
if (!master) {
throw new IllegalStateException("Shouldn't publish state when not master");
}
@ -286,7 +292,7 @@ public class LocalDiscovery extends AbstractLifecycleComponent<Discovery> implem
}
nodesToPublishTo.add(localDiscovery.localNode);
}
publish(members, clusterState, new AckClusterStatePublishResponseHandler(nodesToPublishTo, ackListener));
publish(members, clusterChangedEvent, new AckClusterStatePublishResponseHandler(nodesToPublishTo, ackListener));
}
}
@ -299,17 +305,47 @@ public class LocalDiscovery extends AbstractLifecycleComponent<Discovery> implem
return members.toArray(new LocalDiscovery[members.size()]);
}
private void publish(LocalDiscovery[] members, ClusterState clusterState, final BlockingClusterStatePublishResponseHandler publishResponseHandler) {
private void publish(LocalDiscovery[] members, ClusterChangedEvent clusterChangedEvent, final BlockingClusterStatePublishResponseHandler publishResponseHandler) {
try {
// we do the marshaling intentionally, to check it works well...
final byte[] clusterStateBytes = Builder.toBytes(clusterState);
byte[] clusterStateBytes = null;
byte[] clusterStateDiffBytes = null;
ClusterState clusterState = clusterChangedEvent.state();
for (final LocalDiscovery discovery : members) {
if (discovery.master) {
continue;
}
final ClusterState nodeSpecificClusterState = ClusterState.Builder.fromBytes(clusterStateBytes, discovery.localNode);
ClusterState newNodeSpecificClusterState = null;
synchronized (this) {
// we do the marshaling intentionally, to check it works well...
// check if we published the cluster state at least once and the node was in the cluster when we last published it
if (discovery.lastProcessedClusterState != null && clusterChangedEvent.previousState().nodes().nodeExists(discovery.localNode.id())) {
// both conditions hold - we can try sending the cluster state as a diff
if (clusterStateDiffBytes == null) {
Diff diff = clusterState.diff(clusterChangedEvent.previousState());
BytesStreamOutput os = new BytesStreamOutput();
diff.writeTo(os);
clusterStateDiffBytes = os.bytes().toBytes();
}
try {
newNodeSpecificClusterState = discovery.lastProcessedClusterState.readDiffFrom(new BytesStreamInput(clusterStateDiffBytes)).apply(discovery.lastProcessedClusterState);
logger.debug("sending diff cluster state version with size {} to [{}]", clusterStateDiffBytes.length, discovery.localNode.getName());
} catch (IncompatibleClusterStateVersionException ex) {
logger.warn("incompatible cluster state version - resending complete cluster state", ex);
}
}
if (newNodeSpecificClusterState == null) {
if (clusterStateBytes == null) {
clusterStateBytes = Builder.toBytes(clusterState);
}
newNodeSpecificClusterState = ClusterState.Builder.fromBytes(clusterStateBytes, discovery.localNode);
}
discovery.lastProcessedClusterState = newNodeSpecificClusterState;
}
final ClusterState nodeSpecificClusterState = newNodeSpecificClusterState;
nodeSpecificClusterState.status(ClusterState.ClusterStateStatus.RECEIVED);
// ignore cluster state messages that do not include "me", not in the game yet...
if (nodeSpecificClusterState.nodes().localNode() != null) {

View File

@ -22,7 +22,6 @@ package org.elasticsearch.discovery.zen;
import com.google.common.base.Objects;
import com.google.common.collect.Lists;
import com.google.common.collect.Sets;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.ExceptionsHelper;
import org.elasticsearch.cluster.*;
import org.elasticsearch.cluster.block.ClusterBlocks;
@ -329,12 +328,12 @@ public class ZenDiscovery extends AbstractLifecycleComponent<Discovery> implemen
@Override
public void publish(ClusterState clusterState, AckListener ackListener) {
if (!clusterState.getNodes().localNodeMaster()) {
public void publish(ClusterChangedEvent clusterChangedEvent, AckListener ackListener) {
if (!clusterChangedEvent.state().getNodes().localNodeMaster()) {
throw new IllegalStateException("Shouldn't publish state when not master");
}
nodesFD.updateNodesAndPing(clusterState);
publishClusterState.publish(clusterState, ackListener);
nodesFD.updateNodesAndPing(clusterChangedEvent.state());
publishClusterState.publish(clusterChangedEvent, ackListener);
}
/**

View File

@ -21,8 +21,12 @@ package org.elasticsearch.discovery.zen.publish;
import com.google.common.collect.Maps;
import org.elasticsearch.Version;
import org.elasticsearch.cluster.ClusterChangedEvent;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.Diff;
import org.elasticsearch.cluster.IncompatibleClusterStateVersionException;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.component.AbstractComponent;
import org.elasticsearch.common.compress.Compressor;
@ -40,10 +44,13 @@ import org.elasticsearch.discovery.zen.DiscoveryNodesProvider;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.*;
import java.io.IOException;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
/**
*
@ -83,73 +90,43 @@ public class PublishClusterStateAction extends AbstractComponent {
transportService.removeHandler(ACTION_NAME);
}
public void publish(ClusterState clusterState, final Discovery.AckListener ackListener) {
Set<DiscoveryNode> nodesToPublishTo = new HashSet<>(clusterState.nodes().size());
public void publish(ClusterChangedEvent clusterChangedEvent, final Discovery.AckListener ackListener) {
Set<DiscoveryNode> nodesToPublishTo = new HashSet<>(clusterChangedEvent.state().nodes().size());
DiscoveryNode localNode = nodesProvider.nodes().localNode();
for (final DiscoveryNode node : clusterState.nodes()) {
for (final DiscoveryNode node : clusterChangedEvent.state().nodes()) {
if (node.equals(localNode)) {
continue;
}
nodesToPublishTo.add(node);
}
publish(clusterState, nodesToPublishTo, new AckClusterStatePublishResponseHandler(nodesToPublishTo, ackListener));
publish(clusterChangedEvent, nodesToPublishTo, new AckClusterStatePublishResponseHandler(nodesToPublishTo, ackListener));
}
private void publish(final ClusterState clusterState, final Set<DiscoveryNode> nodesToPublishTo,
private void publish(final ClusterChangedEvent clusterChangedEvent, final Set<DiscoveryNode> nodesToPublishTo,
final BlockingClusterStatePublishResponseHandler publishResponseHandler) {
Map<Version, BytesReference> serializedStates = Maps.newHashMap();
Map<Version, BytesReference> serializedDiffs = Maps.newHashMap();
final ClusterState clusterState = clusterChangedEvent.state();
final ClusterState previousState = clusterChangedEvent.previousState();
final AtomicBoolean timedOutWaitingForNodes = new AtomicBoolean(false);
final TimeValue publishTimeout = discoverySettings.getPublishTimeout();
final boolean sendFullVersion = !discoverySettings.getPublishDiff() || previousState == null;
Diff<ClusterState> diff = null;
for (final DiscoveryNode node : nodesToPublishTo) {
// try and serialize the cluster state once (or per version), so we don't serialize it
// per node when we send it over the wire, compress it while we are at it...
BytesReference bytes = serializedStates.get(node.version());
if (bytes == null) {
try {
BytesStreamOutput bStream = new BytesStreamOutput();
StreamOutput stream = CompressorFactory.defaultCompressor().streamOutput(bStream);
stream.setVersion(node.version());
ClusterState.Builder.writeTo(clusterState, stream);
stream.close();
bytes = bStream.bytes();
serializedStates.put(node.version(), bytes);
} catch (Throwable e) {
logger.warn("failed to serialize cluster_state before publishing it to node {}", e, node);
publishResponseHandler.onFailure(node, e);
continue;
// we don't send full version if node didn't exist in the previous version of cluster state
if (sendFullVersion || !previousState.nodes().nodeExists(node.id())) {
sendFullClusterState(clusterState, serializedStates, node, timedOutWaitingForNodes, publishTimeout, publishResponseHandler);
} else {
if (diff == null) {
diff = clusterState.diff(previousState);
}
}
try {
TransportRequestOptions options = TransportRequestOptions.options().withType(TransportRequestOptions.Type.STATE).withCompress(false);
// no need to put a timeout on the options here, because we want the response to eventually be received
// and not log an error if it arrives after the timeout
transportService.sendRequest(node, ACTION_NAME,
new BytesTransportRequest(bytes, node.version()),
options, // no need to compress, we already compressed the bytes
new EmptyTransportResponseHandler(ThreadPool.Names.SAME) {
@Override
public void handleResponse(TransportResponse.Empty response) {
if (timedOutWaitingForNodes.get()) {
logger.debug("node {} responded for cluster state [{}] (took longer than [{}])", node, clusterState.version(), publishTimeout);
}
publishResponseHandler.onResponse(node);
}
@Override
public void handleException(TransportException exp) {
logger.debug("failed to send cluster state to {}", exp, node);
publishResponseHandler.onFailure(node, exp);
}
});
} catch (Throwable t) {
logger.debug("error sending cluster state to {}", t, node);
publishResponseHandler.onFailure(node, t);
sendClusterStateDiff(clusterState, diff, serializedDiffs, node, timedOutWaitingForNodes, publishTimeout, publishResponseHandler);
}
}
@ -171,7 +148,107 @@ public class PublishClusterStateAction extends AbstractComponent {
}
}
private void sendFullClusterState(ClusterState clusterState, @Nullable Map<Version, BytesReference> serializedStates,
DiscoveryNode node, AtomicBoolean timedOutWaitingForNodes, TimeValue publishTimeout,
BlockingClusterStatePublishResponseHandler publishResponseHandler) {
BytesReference bytes = null;
if (serializedStates != null) {
bytes = serializedStates.get(node.version());
}
if (bytes == null) {
try {
bytes = serializeFullClusterState(clusterState, node.version());
if (serializedStates != null) {
serializedStates.put(node.version(), bytes);
}
} catch (Throwable e) {
logger.warn("failed to serialize cluster_state before publishing it to node {}", e, node);
publishResponseHandler.onFailure(node, e);
return;
}
}
publishClusterStateToNode(clusterState, bytes, node, timedOutWaitingForNodes, publishTimeout, publishResponseHandler, false);
}
private void sendClusterStateDiff(ClusterState clusterState, Diff diff, Map<Version, BytesReference> serializedDiffs, DiscoveryNode node,
AtomicBoolean timedOutWaitingForNodes, TimeValue publishTimeout,
BlockingClusterStatePublishResponseHandler publishResponseHandler) {
BytesReference bytes = serializedDiffs.get(node.version());
if (bytes == null) {
try {
bytes = serializeDiffClusterState(diff, node.version());
serializedDiffs.put(node.version(), bytes);
} catch (Throwable e) {
logger.warn("failed to serialize diff of cluster_state before publishing it to node {}", e, node);
publishResponseHandler.onFailure(node, e);
return;
}
}
publishClusterStateToNode(clusterState, bytes, node, timedOutWaitingForNodes, publishTimeout, publishResponseHandler, true);
}
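
Both send paths above memoize the serialized bytes per node `Version` (`serializedStates` for full states, `serializedDiffs` for diffs), so each distinct wire version pays the serialization cost once no matter how many nodes share it. A minimal, generic sketch of that caching pattern (the class and names here are illustrative, not part of the code above):

[source,java]
--------------------------------------------------
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Serialize-once-per-version memoization, as used above for both full
// states and diffs; V stands in for the node Version, B for the bytes.
final class PerVersionCache<V, B> {
    private final Map<V, B> cache = new HashMap<>();

    B getOrSerialize(V version, Function<V, B> serializer) {
        return cache.computeIfAbsent(version, serializer);
    }
}
--------------------------------------------------
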
private void publishClusterStateToNode(final ClusterState clusterState, BytesReference bytes,
final DiscoveryNode node, final AtomicBoolean timedOutWaitingForNodes,
final TimeValue publishTimeout,
final BlockingClusterStatePublishResponseHandler publishResponseHandler,
final boolean sendDiffs) {
try {
TransportRequestOptions options = TransportRequestOptions.options().withType(TransportRequestOptions.Type.STATE).withCompress(false);
// no need to put a timeout on the options here, because we want the response to eventually be received
// and not log an error if it arrives after the timeout
transportService.sendRequest(node, ACTION_NAME,
new BytesTransportRequest(bytes, node.version()),
options, // no need to compress, we already compressed the bytes
new EmptyTransportResponseHandler(ThreadPool.Names.SAME) {
@Override
public void handleResponse(TransportResponse.Empty response) {
if (timedOutWaitingForNodes.get()) {
logger.debug("node {} responded for cluster state [{}] (took longer than [{}])", node, clusterState.version(), publishTimeout);
}
publishResponseHandler.onResponse(node);
}
@Override
public void handleException(TransportException exp) {
if (sendDiffs && exp.unwrapCause() instanceof IncompatibleClusterStateVersionException) {
logger.debug("resending full cluster state to node {} reason {}", node, exp.getDetailedMessage());
sendFullClusterState(clusterState, null, node, timedOutWaitingForNodes, publishTimeout, publishResponseHandler);
} else {
logger.debug("failed to send cluster state to {}", exp, node);
publishResponseHandler.onFailure(node, exp);
}
}
});
} catch (Throwable t) {
logger.warn("error sending cluster state to {}", t, node);
publishResponseHandler.onFailure(node, t);
}
}
public static BytesReference serializeFullClusterState(ClusterState clusterState, Version nodeVersion) throws IOException {
BytesStreamOutput bStream = new BytesStreamOutput();
StreamOutput stream = CompressorFactory.defaultCompressor().streamOutput(bStream);
stream.setVersion(nodeVersion);
stream.writeBoolean(true);
clusterState.writeTo(stream);
stream.close();
return bStream.bytes();
}
public static BytesReference serializeDiffClusterState(Diff diff, Version nodeVersion) throws IOException {
BytesStreamOutput bStream = new BytesStreamOutput();
StreamOutput stream = CompressorFactory.defaultCompressor().streamOutput(bStream);
stream.setVersion(nodeVersion);
stream.writeBoolean(false);
diff.writeTo(stream);
stream.close();
return bStream.bytes();
}
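
The leading boolean written by both serializers above is the only framing on the wire: `true` announces a complete `ClusterState`, `false` a `Diff` to apply to a previously received state. The request handler below branches on exactly that flag; condensed into a single hedged helper for illustration (types come from the surrounding file, `in` is assumed to be already decompressed with its version set, and `lastSeen` may be null on a node that has not received a state yet):

[source,java]
--------------------------------------------------
// Condensed restatement of the read side, for illustration only; the
// real logic lives in PublishClusterStateRequestHandler below.
static ClusterState readFullStateOrDiff(StreamInput in, ClusterState lastSeen,
                                        DiscoveryNode localNode) throws IOException {
    if (in.readBoolean()) {
        // a full cluster state follows
        return ClusterState.Builder.readFrom(in, localNode);
    }
    if (lastSeen == null) {
        // a diff is useless without a base state; the real handler throws
        // IncompatibleClusterStateVersionException to request a full resend
        throw new IllegalStateException("have no local cluster state");
    }
    Diff<ClusterState> diff = lastSeen.readDiffFrom(in);
    return diff.apply(lastSeen);
}
--------------------------------------------------
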
private class PublishClusterStateRequestHandler implements TransportRequestHandler<BytesTransportRequest> {
private ClusterState lastSeenClusterState;
@Override
public void messageReceived(BytesTransportRequest request, final TransportChannel channel) throws Exception {
@ -183,11 +260,24 @@ public class PublishClusterStateAction extends AbstractComponent {
in = request.bytes().streamInput();
}
in.setVersion(request.version());
ClusterState clusterState = ClusterState.Builder.readFrom(in, nodesProvider.nodes().localNode());
clusterState.status(ClusterState.ClusterStateStatus.RECEIVED);
logger.debug("received cluster state version {}", clusterState.version());
synchronized (this) {
// If true we received full cluster state - otherwise diffs
if (in.readBoolean()) {
lastSeenClusterState = ClusterState.Builder.readFrom(in, nodesProvider.nodes().localNode());
logger.debug("received full cluster state version {} with size {}", lastSeenClusterState.version(), request.bytes().length());
} else if (lastSeenClusterState != null) {
Diff<ClusterState> diff = lastSeenClusterState.readDiffFrom(in);
lastSeenClusterState = diff.apply(lastSeenClusterState);
logger.debug("received diff cluster state version {} with uuid {}, diff size {}", lastSeenClusterState.version(), lastSeenClusterState.uuid(), request.bytes().length());
} else {
logger.debug("received diff for but don't have any local cluster state - requesting full state");
throw new IncompatibleClusterStateVersionException("have no local cluster state");
}
lastSeenClusterState.status(ClusterState.ClusterStateStatus.RECEIVED);
}
try {
listener.onNewClusterState(clusterState, new NewClusterStateListener.NewStateProcessed() {
listener.onNewClusterState(lastSeenClusterState, new NewClusterStateListener.NewStateProcessed() {
@Override
public void onNewClusterStateProcessed() {
try {
@ -207,7 +297,7 @@ public class PublishClusterStateAction extends AbstractComponent {
}
});
} catch (Exception e) {
logger.warn("unexpected error while processing cluster state version [{}]", e, clusterState.version());
logger.warn("unexpected error while processing cluster state version [{}]", e, lastSeenClusterState.version());
try {
channel.sendResponse(e);
} catch (Throwable e1) {
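
Taken together, the changes in this file implement an optimistic publishing protocol: a node that existed in the previous cluster state is sent a diff, and if it cannot apply it (it answers with `IncompatibleClusterStateVersionException`), the master falls back to resending the full state. A self-contained sketch of that strategy with hypothetical generic types, not the Elasticsearch ones above:

[source,java]
--------------------------------------------------
// Generic illustration of the diff-or-full strategy; Receiver and the
// byte[] payloads are assumptions made for this sketch.
interface Receiver {
    void acceptFull(byte[] fullState);
    // returns false when no compatible base state is held, mirroring
    // IncompatibleClusterStateVersionException in the real protocol
    boolean tryApplyDiff(byte[] diff);
}

final class DiffingPublisher {
    void publish(Receiver receiver, boolean receiverHadPreviousState, byte[] fullState, byte[] diff) {
        // prefer the small delta when the receiver should hold the base state
        if (receiverHadPreviousState && receiver.tryApplyDiff(diff)) {
            return;
        }
        // otherwise, or on failure, fall back to the self-contained snapshot
        receiver.acceptFull(fullState);
    }
}
--------------------------------------------------
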

@ -31,7 +31,7 @@ import org.elasticsearch.common.component.AbstractComponent;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.NodeEnvironment;
import org.elasticsearch.indices.IndicesService;
import java.nio.file.Path;

@ -198,7 +198,7 @@ public class LocalAllocateDangledIndices extends AbstractComponent {
fromNode.writeTo(out);
out.writeVInt(indices.length);
for (IndexMetaData indexMetaData : indices) {
IndexMetaData.Builder.writeTo(indexMetaData, out);
indexMetaData.writeTo(out);
}
}
}

@ -221,7 +221,7 @@ public class TransportNodesListGatewayMetaState extends TransportNodesOperationA
out.writeBoolean(false);
} else {
out.writeBoolean(true);
MetaData.Builder.writeTo(metaData, out);
metaData.writeTo(out);
}
}
}

@ -223,7 +223,7 @@ public class PercolatorQueriesRegistry extends AbstractIndexShardComponent imple
context.setMapUnmappedFieldAsString(mapUnmappedFieldsAsString ? true : false);
return queryParserService.parseInnerQuery(context);
} catch (IOException e) {
throw new QueryParsingException(queryParserService.index(), "Failed to parse", e);
throw new QueryParsingException(context, "Failed to parse", e);
} finally {
if (type != null) {
QueryParseContext.setTypes(previousTypes);
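
This file shows the first instance of a substitution this commit applies to every query and filter parser below: `new QueryParsingException(parseContext.index(), ...)` becomes `new QueryParsingException(parseContext, ...)`. Handing the exception the whole `QueryParseContext` rather than just the index lets it also report where in the request body parsing failed. A hypothetical, simplified stand-in for the new overload (the class, its fields, and the `getTokenLocation()` usage are assumptions for illustration, not the actual exception internals):

[source,java]
--------------------------------------------------
// Hypothetical sketch only: why the parsers now pass the full context.
class LocatedParsingException extends RuntimeException {
    final Index index;   // all the old overload could report
    int lineNumber = -1;
    int columnNumber = -1;

    LocatedParsingException(QueryParseContext context, String msg) {
        super(msg);
        this.index = context.index();
        XContentParser parser = context.parser();
        if (parser != null) {
            // assumed API: token location of the parse failure
            XContentLocation location = parser.getTokenLocation();
            this.lineNumber = location.lineNumber;
            this.columnNumber = location.columnNumber;
        }
    }
}
--------------------------------------------------
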

@ -100,14 +100,14 @@ public class AndFilterParser implements FilterParser {
} else if ("_cache_key".equals(currentFieldName) || "_cacheKey".equals(currentFieldName)) {
cacheKey = new HashedBytesRef(parser.text());
} else {
throw new QueryParsingException(parseContext.index(), "[and] filter does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[and] filter does not support [" + currentFieldName + "]");
}
}
}
}
if (!filtersFound) {
throw new QueryParsingException(parseContext.index(), "[and] filter requires 'filters' to be set on it'");
throw new QueryParsingException(parseContext, "[and] filter requires 'filters' to be set on it'");
}
if (filters.isEmpty()) {

@ -85,7 +85,7 @@ public class BoolFilterParser implements FilterParser {
boolFilter.add(new BooleanClause(filter, BooleanClause.Occur.SHOULD));
}
} else {
throw new QueryParsingException(parseContext.index(), "[bool] filter does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[bool] filter does not support [" + currentFieldName + "]");
}
} else if (token == XContentParser.Token.START_ARRAY) {
if ("must".equals(currentFieldName)) {
@ -114,7 +114,7 @@ public class BoolFilterParser implements FilterParser {
}
}
} else {
throw new QueryParsingException(parseContext.index(), "[bool] filter does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[bool] filter does not support [" + currentFieldName + "]");
}
} else if (token.isValue()) {
if ("_cache".equals(currentFieldName)) {
@ -124,13 +124,13 @@ public class BoolFilterParser implements FilterParser {
} else if ("_cache_key".equals(currentFieldName) || "_cacheKey".equals(currentFieldName)) {
cacheKey = new HashedBytesRef(parser.text());
} else {
throw new QueryParsingException(parseContext.index(), "[bool] filter does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[bool] filter does not support [" + currentFieldName + "]");
}
}
}
if (!hasAnyFilter) {
throw new QueryParsingException(parseContext.index(), "[bool] filter has no inner should/must/must_not elements");
throw new QueryParsingException(parseContext, "[bool] filter has no inner should/must/must_not elements");
}
if (boolFilter.clauses().isEmpty()) {

@ -85,7 +85,7 @@ public class BoolQueryParser implements QueryParser {
clauses.add(new BooleanClause(query, BooleanClause.Occur.SHOULD));
}
} else {
throw new QueryParsingException(parseContext.index(), "[bool] query does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[bool] query does not support [" + currentFieldName + "]");
}
} else if (token == XContentParser.Token.START_ARRAY) {
if ("must".equals(currentFieldName)) {
@ -110,7 +110,7 @@ public class BoolQueryParser implements QueryParser {
}
}
} else {
throw new QueryParsingException(parseContext.index(), "bool query does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "bool query does not support [" + currentFieldName + "]");
}
} else if (token.isValue()) {
if ("disable_coord".equals(currentFieldName) || "disableCoord".equals(currentFieldName)) {
@ -126,7 +126,7 @@ public class BoolQueryParser implements QueryParser {
} else if ("_name".equals(currentFieldName)) {
queryName = parser.text();
} else {
throw new QueryParsingException(parseContext.index(), "[bool] query does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[bool] query does not support [" + currentFieldName + "]");
}
}
}

@ -66,7 +66,7 @@ public class BoostingQueryParser implements QueryParser {
negativeQuery = parseContext.parseInnerQuery();
negativeQueryFound = true;
} else {
throw new QueryParsingException(parseContext.index(), "[boosting] query does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[boosting] query does not support [" + currentFieldName + "]");
}
} else if (token.isValue()) {
if ("negative_boost".equals(currentFieldName) || "negativeBoost".equals(currentFieldName)) {
@ -74,19 +74,19 @@ public class BoostingQueryParser implements QueryParser {
} else if ("boost".equals(currentFieldName)) {
boost = parser.floatValue();
} else {
throw new QueryParsingException(parseContext.index(), "[boosting] query does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[boosting] query does not support [" + currentFieldName + "]");
}
}
}
if (positiveQuery == null && !positiveQueryFound) {
throw new QueryParsingException(parseContext.index(), "[boosting] query requires 'positive' query to be set'");
throw new QueryParsingException(parseContext, "[boosting] query requires 'positive' query to be set'");
}
if (negativeQuery == null && !negativeQueryFound) {
throw new QueryParsingException(parseContext.index(), "[boosting] query requires 'negative' query to be set'");
throw new QueryParsingException(parseContext, "[boosting] query requires 'negative' query to be set'");
}
if (negativeBoost == -1) {
throw new QueryParsingException(parseContext.index(), "[boosting] query requires 'negative_boost' to be set'");
throw new QueryParsingException(parseContext, "[boosting] query requires 'negative_boost' to be set'");
}
// parsers returned null

@ -19,6 +19,9 @@
package org.elasticsearch.index.query;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.similarities.Similarity;
import org.elasticsearch.common.xcontent.XContentBuilder;
import java.io.IOException;

@ -65,7 +65,7 @@ public class CommonTermsQueryParser implements QueryParser {
XContentParser parser = parseContext.parser();
XContentParser.Token token = parser.nextToken();
if (token != XContentParser.Token.FIELD_NAME) {
throw new QueryParsingException(parseContext.index(), "[common] query malformed, no field");
throw new QueryParsingException(parseContext, "[common] query malformed, no field");
}
String fieldName = parser.currentName();
Object value = null;
@ -96,12 +96,13 @@ public class CommonTermsQueryParser implements QueryParser {
} else if ("high_freq".equals(innerFieldName) || "highFreq".equals(innerFieldName)) {
highFreqMinimumShouldMatch = parser.text();
} else {
throw new QueryParsingException(parseContext.index(), "[common] query does not support [" + innerFieldName + "] for [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[common] query does not support [" + innerFieldName
+ "] for [" + currentFieldName + "]");
}
}
}
} else {
throw new QueryParsingException(parseContext.index(), "[common] query does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[common] query does not support [" + currentFieldName + "]");
}
} else if (token.isValue()) {
if ("query".equals(currentFieldName)) {
@ -109,7 +110,7 @@ public class CommonTermsQueryParser implements QueryParser {
} else if ("analyzer".equals(currentFieldName)) {
String analyzer = parser.text();
if (parseContext.analysisService().analyzer(analyzer) == null) {
throw new QueryParsingException(parseContext.index(), "[common] analyzer [" + parser.text() + "] not found");
throw new QueryParsingException(parseContext, "[common] analyzer [" + parser.text() + "] not found");
}
queryAnalyzer = analyzer;
} else if ("disable_coord".equals(currentFieldName) || "disableCoord".equals(currentFieldName)) {
@ -123,7 +124,7 @@ public class CommonTermsQueryParser implements QueryParser {
} else if ("and".equalsIgnoreCase(op)) {
highFreqOccur = BooleanClause.Occur.MUST;
} else {
throw new QueryParsingException(parseContext.index(),
throw new QueryParsingException(parseContext,
"[common] query requires operator to be either 'and' or 'or', not [" + op + "]");
}
} else if ("low_freq_operator".equals(currentFieldName) || "lowFreqOperator".equals(currentFieldName)) {
@ -133,7 +134,7 @@ public class CommonTermsQueryParser implements QueryParser {
} else if ("and".equalsIgnoreCase(op)) {
lowFreqOccur = BooleanClause.Occur.MUST;
} else {
throw new QueryParsingException(parseContext.index(),
throw new QueryParsingException(parseContext,
"[common] query requires operator to be either 'and' or 'or', not [" + op + "]");
}
} else if ("minimum_should_match".equals(currentFieldName) || "minimumShouldMatch".equals(currentFieldName)) {
@ -143,7 +144,7 @@ public class CommonTermsQueryParser implements QueryParser {
} else if ("_name".equals(currentFieldName)) {
queryName = parser.text();
} else {
throw new QueryParsingException(parseContext.index(), "[common] query does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[common] query does not support [" + currentFieldName + "]");
}
}
}
@ -154,13 +155,13 @@ public class CommonTermsQueryParser implements QueryParser {
token = parser.nextToken();
if (token != XContentParser.Token.END_OBJECT) {
throw new QueryParsingException(
parseContext.index(),
parseContext,
"[common] query parsed in simplified form, with direct field name, but included more options than just the field name, possibly use its 'options' form, with 'query' element?");
}
}
if (value == null) {
throw new QueryParsingException(parseContext.index(), "No text specified for text query");
throw new QueryParsingException(parseContext, "No text specified for text query");
}
FieldMapper<?> mapper = null;
String field;

@ -71,7 +71,7 @@ public class ConstantScoreQueryParser implements QueryParser {
query = parseContext.parseInnerQuery();
queryFound = true;
} else {
throw new QueryParsingException(parseContext.index(), "[constant_score] query does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[constant_score] query does not support [" + currentFieldName + "]");
}
} else if (token.isValue()) {
if ("boost".equals(currentFieldName)) {
@ -81,12 +81,12 @@ public class ConstantScoreQueryParser implements QueryParser {
} else if ("_cache_key".equals(currentFieldName) || "_cacheKey".equals(currentFieldName)) {
cacheKey = new HashedBytesRef(parser.text());
} else {
throw new QueryParsingException(parseContext.index(), "[constant_score] query does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[constant_score] query does not support [" + currentFieldName + "]");
}
}
}
if (!filterFound && !queryFound) {
throw new QueryParsingException(parseContext.index(), "[constant_score] requires either 'filter' or 'query' element");
throw new QueryParsingException(parseContext, "[constant_score] requires either 'filter' or 'query' element");
}
if (query == null && filter == null) {

@ -70,7 +70,7 @@ public class DisMaxQueryParser implements QueryParser {
queries.add(query);
}
} else {
throw new QueryParsingException(parseContext.index(), "[dis_max] query does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[dis_max] query does not support [" + currentFieldName + "]");
}
} else if (token == XContentParser.Token.START_ARRAY) {
if ("queries".equals(currentFieldName)) {
@ -83,7 +83,7 @@ public class DisMaxQueryParser implements QueryParser {
token = parser.nextToken();
}
} else {
throw new QueryParsingException(parseContext.index(), "[dis_max] query does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[dis_max] query does not support [" + currentFieldName + "]");
}
} else {
if ("boost".equals(currentFieldName)) {
@ -93,13 +93,13 @@ public class DisMaxQueryParser implements QueryParser {
} else if ("_name".equals(currentFieldName)) {
queryName = parser.text();
} else {
throw new QueryParsingException(parseContext.index(), "[dis_max] query does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[dis_max] query does not support [" + currentFieldName + "]");
}
}
}
if (!queriesFound) {
throw new QueryParsingException(parseContext.index(), "[dis_max] requires 'queries' field");
throw new QueryParsingException(parseContext, "[dis_max] requires 'queries' field");
}
if (queries.isEmpty()) {

@ -23,8 +23,6 @@ import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.QueryWrapperFilter;
import org.apache.lucene.search.TermRangeFilter;
import org.apache.lucene.search.TermRangeQuery;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.lucene.HashedBytesRef;
@ -71,13 +69,13 @@ public class ExistsFilterParser implements FilterParser {
} else if ("_name".equals(currentFieldName)) {
filterName = parser.text();
} else {
throw new QueryParsingException(parseContext.index(), "[exists] filter does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[exists] filter does not support [" + currentFieldName + "]");
}
}
}
if (fieldPattern == null) {
throw new QueryParsingException(parseContext.index(), "exists must be provided with a [field]");
throw new QueryParsingException(parseContext, "exists must be provided with a [field]");
}
return newFilter(parseContext, fieldPattern, filterName);

@ -66,7 +66,7 @@ public class FQueryFilterParser implements FilterParser {
queryFound = true;
query = parseContext.parseInnerQuery();
} else {
throw new QueryParsingException(parseContext.index(), "[fquery] filter does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[fquery] filter does not support [" + currentFieldName + "]");
}
} else if (token.isValue()) {
if ("_name".equals(currentFieldName)) {
@ -76,12 +76,12 @@ public class FQueryFilterParser implements FilterParser {
} else if ("_cache_key".equals(currentFieldName) || "_cacheKey".equals(currentFieldName)) {
cacheKey = new HashedBytesRef(parser.text());
} else {
throw new QueryParsingException(parseContext.index(), "[fquery] filter does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[fquery] filter does not support [" + currentFieldName + "]");
}
}
}
if (!queryFound) {
throw new QueryParsingException(parseContext.index(), "[fquery] requires 'query' element");
throw new QueryParsingException(parseContext, "[fquery] requires 'query' element");
}
if (query == null) {
return null;

@ -64,11 +64,12 @@ public class FieldMaskingSpanQueryParser implements QueryParser {
if ("query".equals(currentFieldName)) {
Query query = parseContext.parseInnerQuery();
if (!(query instanceof SpanQuery)) {
throw new QueryParsingException(parseContext.index(), "[field_masking_span] query] must be of type span query");
throw new QueryParsingException(parseContext, "[field_masking_span] query] must be of type span query");
}
inner = (SpanQuery) query;
} else {
throw new QueryParsingException(parseContext.index(), "[field_masking_span] query does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[field_masking_span] query does not support ["
+ currentFieldName + "]");
}
} else {
if ("boost".equals(currentFieldName)) {
@ -78,15 +79,15 @@ public class FieldMaskingSpanQueryParser implements QueryParser {
} else if ("_name".equals(currentFieldName)) {
queryName = parser.text();
} else {
throw new QueryParsingException(parseContext.index(), "[field_masking_span] query does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[field_masking_span] query does not support [" + currentFieldName + "]");
}
}
}
if (inner == null) {
throw new QueryParsingException(parseContext.index(), "field_masking_span must have [query] span query clause");
throw new QueryParsingException(parseContext, "field_masking_span must have [query] span query clause");
}
if (field == null) {
throw new QueryParsingException(parseContext.index(), "field_masking_span must have [field] set for it");
throw new QueryParsingException(parseContext, "field_masking_span must have [field] set for it");
}
FieldMapper mapper = parseContext.fieldMapper(field);

@ -73,7 +73,7 @@ public class FilteredQueryParser implements QueryParser {
filterFound = true;
filter = parseContext.parseInnerFilter();
} else {
throw new QueryParsingException(parseContext.index(), "[filtered] query does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[filtered] query does not support [" + currentFieldName + "]");
}
} else if (token.isValue()) {
if ("strategy".equals(currentFieldName)) {
@ -93,7 +93,7 @@ public class FilteredQueryParser implements QueryParser {
} else if ("leap_frog_filter_first".equals(value) || "leapFrogFilterFirst".equals(value)) {
filterStrategy = FilteredQuery.LEAP_FROG_FILTER_FIRST_STRATEGY;
} else {
throw new QueryParsingException(parseContext.index(), "[filtered] strategy value not supported [" + value + "]");
throw new QueryParsingException(parseContext, "[filtered] strategy value not supported [" + value + "]");
}
} else if ("_name".equals(currentFieldName)) {
queryName = parser.text();
@ -104,7 +104,7 @@ public class FilteredQueryParser implements QueryParser {
} else if ("_cache_key".equals(currentFieldName) || "_cacheKey".equals(currentFieldName)) {
cacheKey = new HashedBytesRef(parser.text());
} else {
throw new QueryParsingException(parseContext.index(), "[filtered] query does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[filtered] query does not support [" + currentFieldName + "]");
}
}
}

@ -57,7 +57,7 @@ public class FuzzyQueryParser implements QueryParser {
XContentParser.Token token = parser.nextToken();
if (token != XContentParser.Token.FIELD_NAME) {
throw new QueryParsingException(parseContext.index(), "[fuzzy] query malformed, no field");
throw new QueryParsingException(parseContext, "[fuzzy] query malformed, no field");
}
String fieldName = parser.currentName();
@ -95,7 +95,7 @@ public class FuzzyQueryParser implements QueryParser {
} else if ("_name".equals(currentFieldName)) {
queryName = parser.text();
} else {
throw new QueryParsingException(parseContext.index(), "[fuzzy] query does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[fuzzy] query does not support [" + currentFieldName + "]");
}
}
}
@ -107,7 +107,7 @@ public class FuzzyQueryParser implements QueryParser {
}
if (value == null) {
throw new QueryParsingException(parseContext.index(), "No value specified for fuzzy query");
throw new QueryParsingException(parseContext, "No value specified for fuzzy query");
}
Query query = null;

@ -147,7 +147,7 @@ public class GeoBoundingBoxFilterParser implements FilterParser {
} else if ("type".equals(currentFieldName)) {
type = parser.text();
} else {
throw new QueryParsingException(parseContext.index(), "[geo_bbox] filter does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[geo_bbox] filter does not support [" + currentFieldName + "]");
}
}
}
@ -169,11 +169,11 @@ public class GeoBoundingBoxFilterParser implements FilterParser {
MapperService.SmartNameFieldMappers smartMappers = parseContext.smartFieldMappers(fieldName);
if (smartMappers == null || !smartMappers.hasMapper()) {
throw new QueryParsingException(parseContext.index(), "failed to find geo_point field [" + fieldName + "]");
throw new QueryParsingException(parseContext, "failed to find geo_point field [" + fieldName + "]");
}
FieldMapper<?> mapper = smartMappers.mapper();
if (!(mapper instanceof GeoPointFieldMapper)) {
throw new QueryParsingException(parseContext.index(), "field [" + fieldName + "] is not a geo_point field");
throw new QueryParsingException(parseContext, "field [" + fieldName + "] is not a geo_point field");
}
GeoPointFieldMapper geoMapper = ((GeoPointFieldMapper) mapper);
@ -184,7 +184,8 @@ public class GeoBoundingBoxFilterParser implements FilterParser {
IndexGeoPointFieldData indexFieldData = parseContext.getForField(mapper);
filter = new InMemoryGeoBoundingBoxFilter(topLeft, bottomRight, indexFieldData);
} else {
throw new QueryParsingException(parseContext.index(), "geo bounding box type [" + type + "] not supported, either 'indexed' or 'memory' are allowed");
throw new QueryParsingException(parseContext, "geo bounding box type [" + type
+ "] not supported, either 'indexed' or 'memory' are allowed");
}
if (cache != null) {

@ -98,7 +98,8 @@ public class GeoDistanceFilterParser implements FilterParser {
} else if (currentName.equals(GeoPointFieldMapper.Names.GEOHASH)) {
GeoHashUtils.decode(parser.text(), point);
} else {
throw new QueryParsingException(parseContext.index(), "[geo_distance] filter does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[geo_distance] filter does not support [" + currentFieldName
+ "]");
}
}
}
@ -141,7 +142,7 @@ public class GeoDistanceFilterParser implements FilterParser {
}
if (vDistance == null) {
throw new QueryParsingException(parseContext.index(), "geo_distance requires 'distance' to be specified");
throw new QueryParsingException(parseContext, "geo_distance requires 'distance' to be specified");
} else if (vDistance instanceof Number) {
distance = DistanceUnit.DEFAULT.convert(((Number) vDistance).doubleValue(), unit);
} else {
@ -155,11 +156,11 @@ public class GeoDistanceFilterParser implements FilterParser {
MapperService.SmartNameFieldMappers smartMappers = parseContext.smartFieldMappers(fieldName);
if (smartMappers == null || !smartMappers.hasMapper()) {
throw new QueryParsingException(parseContext.index(), "failed to find geo_point field [" + fieldName + "]");
throw new QueryParsingException(parseContext, "failed to find geo_point field [" + fieldName + "]");
}
FieldMapper<?> mapper = smartMappers.mapper();
if (!(mapper instanceof GeoPointFieldMapper)) {
throw new QueryParsingException(parseContext.index(), "field [" + fieldName + "] is not a geo_point field");
throw new QueryParsingException(parseContext, "field [" + fieldName + "] is not a geo_point field");
}
GeoPointFieldMapper geoMapper = ((GeoPointFieldMapper) mapper);

@ -196,11 +196,11 @@ public class GeoDistanceRangeFilterParser implements FilterParser {
MapperService.SmartNameFieldMappers smartMappers = parseContext.smartFieldMappers(fieldName);
if (smartMappers == null || !smartMappers.hasMapper()) {
throw new QueryParsingException(parseContext.index(), "failed to find geo_point field [" + fieldName + "]");
throw new QueryParsingException(parseContext, "failed to find geo_point field [" + fieldName + "]");
}
FieldMapper<?> mapper = smartMappers.mapper();
if (!(mapper instanceof GeoPointFieldMapper)) {
throw new QueryParsingException(parseContext.index(), "field [" + fieldName + "] is not a geo_point field");
throw new QueryParsingException(parseContext, "field [" + fieldName + "] is not a geo_point field");
}
GeoPointFieldMapper geoMapper = ((GeoPointFieldMapper) mapper);

@ -96,10 +96,12 @@ public class GeoPolygonFilterParser implements FilterParser {
shell.add(GeoUtils.parseGeoPoint(parser));
}
} else {
throw new QueryParsingException(parseContext.index(), "[geo_polygon] filter does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[geo_polygon] filter does not support [" + currentFieldName
+ "]");
}
} else {
throw new QueryParsingException(parseContext.index(), "[geo_polygon] filter does not support token type [" + token.name() + "] under [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[geo_polygon] filter does not support token type [" + token.name()
+ "] under [" + currentFieldName + "]");
}
}
} else if (token.isValue()) {
@ -113,25 +115,25 @@ public class GeoPolygonFilterParser implements FilterParser {
normalizeLat = parser.booleanValue();
normalizeLon = parser.booleanValue();
} else {
throw new QueryParsingException(parseContext.index(), "[geo_polygon] filter does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[geo_polygon] filter does not support [" + currentFieldName + "]");
}
} else {
throw new QueryParsingException(parseContext.index(), "[geo_polygon] unexpected token type [" + token.name() + "]");
throw new QueryParsingException(parseContext, "[geo_polygon] unexpected token type [" + token.name() + "]");
}
}
if (shell.isEmpty()) {
throw new QueryParsingException(parseContext.index(), "no points defined for geo_polygon filter");
throw new QueryParsingException(parseContext, "no points defined for geo_polygon filter");
} else {
if (shell.size() < 3) {
throw new QueryParsingException(parseContext.index(), "too few points defined for geo_polygon filter");
throw new QueryParsingException(parseContext, "too few points defined for geo_polygon filter");
}
GeoPoint start = shell.get(0);
if (!start.equals(shell.get(shell.size() - 1))) {
shell.add(start);
}
if (shell.size() < 4) {
throw new QueryParsingException(parseContext.index(), "too few points defined for geo_polygon filter");
throw new QueryParsingException(parseContext, "too few points defined for geo_polygon filter");
}
}
@ -143,11 +145,11 @@ public class GeoPolygonFilterParser implements FilterParser {
MapperService.SmartNameFieldMappers smartMappers = parseContext.smartFieldMappers(fieldName);
if (smartMappers == null || !smartMappers.hasMapper()) {
throw new QueryParsingException(parseContext.index(), "failed to find geo_point field [" + fieldName + "]");
throw new QueryParsingException(parseContext, "failed to find geo_point field [" + fieldName + "]");
}
FieldMapper<?> mapper = smartMappers.mapper();
if (!(mapper instanceof GeoPointFieldMapper)) {
throw new QueryParsingException(parseContext.index(), "field [" + fieldName + "] is not a geo_point field");
throw new QueryParsingException(parseContext, "field [" + fieldName + "] is not a geo_point field");
}
IndexGeoPointFieldData indexFieldData = parseContext.getForField(mapper);

@ -113,7 +113,7 @@ public class GeoShapeFilterParser implements FilterParser {
} else if ("relation".equals(currentFieldName)) {
shapeRelation = ShapeRelation.getRelationByName(parser.text());
if (shapeRelation == null) {
throw new QueryParsingException(parseContext.index(), "Unknown shape operation [" + parser.text() + "]");
throw new QueryParsingException(parseContext, "Unknown shape operation [" + parser.text() + "]");
}
} else if ("strategy".equals(currentFieldName)) {
strategyName = parser.text();
@ -134,13 +134,13 @@ public class GeoShapeFilterParser implements FilterParser {
}
}
if (id == null) {
throw new QueryParsingException(parseContext.index(), "ID for indexed shape not provided");
throw new QueryParsingException(parseContext, "ID for indexed shape not provided");
} else if (type == null) {
throw new QueryParsingException(parseContext.index(), "Type for indexed shape not provided");
throw new QueryParsingException(parseContext, "Type for indexed shape not provided");
}
shape = fetchService.fetch(id, type, index, shapePath);
} else {
throw new QueryParsingException(parseContext.index(), "[geo_shape] filter does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[geo_shape] filter does not support [" + currentFieldName + "]");
}
}
}
@ -152,26 +152,26 @@ public class GeoShapeFilterParser implements FilterParser {
} else if ("_cache_key".equals(currentFieldName)) {
cacheKey = new HashedBytesRef(parser.text());
} else {
throw new QueryParsingException(parseContext.index(), "[geo_shape] filter does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[geo_shape] filter does not support [" + currentFieldName + "]");
}
}
}
if (shape == null) {
throw new QueryParsingException(parseContext.index(), "No Shape defined");
throw new QueryParsingException(parseContext, "No Shape defined");
} else if (shapeRelation == null) {
throw new QueryParsingException(parseContext.index(), "No Shape Relation defined");
throw new QueryParsingException(parseContext, "No Shape Relation defined");
}
MapperService.SmartNameFieldMappers smartNameFieldMappers = parseContext.smartFieldMappers(fieldName);
if (smartNameFieldMappers == null || !smartNameFieldMappers.hasMapper()) {
throw new QueryParsingException(parseContext.index(), "Failed to find geo_shape field [" + fieldName + "]");
throw new QueryParsingException(parseContext, "Failed to find geo_shape field [" + fieldName + "]");
}
FieldMapper fieldMapper = smartNameFieldMappers.mapper();
// TODO: This isn't the nicest way to check this
if (!(fieldMapper instanceof GeoShapeFieldMapper)) {
throw new QueryParsingException(parseContext.index(), "Field [" + fieldName + "] is not a geo_shape");
throw new QueryParsingException(parseContext, "Field [" + fieldName + "] is not a geo_shape");
}
GeoShapeFieldMapper shapeFieldMapper = (GeoShapeFieldMapper) fieldMapper;

@ -93,7 +93,7 @@ public class GeoShapeQueryParser implements QueryParser {
} else if ("relation".equals(currentFieldName)) {
shapeRelation = ShapeRelation.getRelationByName(parser.text());
if (shapeRelation == null) {
throw new QueryParsingException(parseContext.index(), "Unknown shape operation [" + parser.text() + " ]");
throw new QueryParsingException(parseContext, "Unknown shape operation [" + parser.text() + " ]");
}
} else if ("indexed_shape".equals(currentFieldName) || "indexedShape".equals(currentFieldName)) {
while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {
@ -112,13 +112,13 @@ public class GeoShapeQueryParser implements QueryParser {
}
}
if (id == null) {
throw new QueryParsingException(parseContext.index(), "ID for indexed shape not provided");
throw new QueryParsingException(parseContext, "ID for indexed shape not provided");
} else if (type == null) {
throw new QueryParsingException(parseContext.index(), "Type for indexed shape not provided");
throw new QueryParsingException(parseContext, "Type for indexed shape not provided");
}
shape = fetchService.fetch(id, type, index, shapePath);
} else {
throw new QueryParsingException(parseContext.index(), "[geo_shape] query does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[geo_shape] query does not support [" + currentFieldName + "]");
}
}
}
@ -128,26 +128,26 @@ public class GeoShapeQueryParser implements QueryParser {
} else if ("_name".equals(currentFieldName)) {
queryName = parser.text();
} else {
throw new QueryParsingException(parseContext.index(), "[geo_shape] query does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[geo_shape] query does not support [" + currentFieldName + "]");
}
}
}
if (shape == null) {
throw new QueryParsingException(parseContext.index(), "No Shape defined");
throw new QueryParsingException(parseContext, "No Shape defined");
} else if (shapeRelation == null) {
throw new QueryParsingException(parseContext.index(), "No Shape Relation defined");
throw new QueryParsingException(parseContext, "No Shape Relation defined");
}
MapperService.SmartNameFieldMappers smartNameFieldMappers = parseContext.smartFieldMappers(fieldName);
if (smartNameFieldMappers == null || !smartNameFieldMappers.hasMapper()) {
throw new QueryParsingException(parseContext.index(), "Failed to find geo_shape field [" + fieldName + "]");
throw new QueryParsingException(parseContext, "Failed to find geo_shape field [" + fieldName + "]");
}
FieldMapper fieldMapper = smartNameFieldMappers.mapper();
// TODO: This isn't the nicest way to check this
if (!(fieldMapper instanceof GeoShapeFieldMapper)) {
throw new QueryParsingException(parseContext.index(), "Field [" + fieldName + "] is not a geo_shape");
throw new QueryParsingException(parseContext, "Field [" + fieldName + "] is not a geo_shape");
}
GeoShapeFieldMapper shapeFieldMapper = (GeoShapeFieldMapper) fieldMapper;

@ -265,22 +265,23 @@ public class GeohashCellFilter {
}
if (geohash == null) {
throw new QueryParsingException(parseContext.index(), "no geohash value provided to geohash_cell filter");
throw new QueryParsingException(parseContext, "no geohash value provided to geohash_cell filter");
}
MapperService.SmartNameFieldMappers smartMappers = parseContext.smartFieldMappers(fieldName);
if (smartMappers == null || !smartMappers.hasMapper()) {
throw new QueryParsingException(parseContext.index(), "failed to find geo_point field [" + fieldName + "]");
throw new QueryParsingException(parseContext, "failed to find geo_point field [" + fieldName + "]");
}
FieldMapper<?> mapper = smartMappers.mapper();
if (!(mapper instanceof GeoPointFieldMapper)) {
throw new QueryParsingException(parseContext.index(), "field [" + fieldName + "] is not a geo_point field");
throw new QueryParsingException(parseContext, "field [" + fieldName + "] is not a geo_point field");
}
GeoPointFieldMapper geoMapper = ((GeoPointFieldMapper) mapper);
if (!geoMapper.isEnableGeohashPrefix()) {
throw new QueryParsingException(parseContext.index(), "can't execute geohash_cell on field [" + fieldName + "], geohash_prefix is not enabled");
throw new QueryParsingException(parseContext, "can't execute geohash_cell on field [" + fieldName
+ "], geohash_prefix is not enabled");
}
if(levels > 0) {

@ -94,7 +94,7 @@ public class HasChildFilterParser implements FilterParser {
} else if ("inner_hits".equals(currentFieldName)) {
innerHits = innerHitsQueryParserHelper.parse(parseContext);
} else {
throw new QueryParsingException(parseContext.index(), "[has_child] filter does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[has_child] filter does not support [" + currentFieldName + "]");
}
} else if (token.isValue()) {
if ("type".equals(currentFieldName) || "child_type".equals(currentFieldName) || "childType".equals(currentFieldName)) {
@ -112,15 +112,15 @@ public class HasChildFilterParser implements FilterParser {
} else if ("max_children".equals(currentFieldName) || "maxChildren".equals(currentFieldName)) {
maxChildren = parser.intValue(true);
} else {
throw new QueryParsingException(parseContext.index(), "[has_child] filter does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[has_child] filter does not support [" + currentFieldName + "]");
}
}
}
if (!queryFound && !filterFound) {
throw new QueryParsingException(parseContext.index(), "[has_child] filter requires 'query' or 'filter' field");
throw new QueryParsingException(parseContext, "[has_child] filter requires 'query' or 'filter' field");
}
if (childType == null) {
throw new QueryParsingException(parseContext.index(), "[has_child] filter requires 'type' field");
throw new QueryParsingException(parseContext, "[has_child] filter requires 'type' field");
}
Query query;
@ -136,7 +136,7 @@ public class HasChildFilterParser implements FilterParser {
DocumentMapper childDocMapper = parseContext.mapperService().documentMapper(childType);
if (childDocMapper == null) {
throw new QueryParsingException(parseContext.index(), "No mapping for for type [" + childType + "]");
throw new QueryParsingException(parseContext, "No mapping for for type [" + childType + "]");
}
if (innerHits != null) {
InnerHitsContext.ParentChildInnerHits parentChildInnerHits = new InnerHitsContext.ParentChildInnerHits(innerHits.v2(), query, null, childDocMapper);
@ -145,7 +145,7 @@ public class HasChildFilterParser implements FilterParser {
}
ParentFieldMapper parentFieldMapper = childDocMapper.parentFieldMapper();
if (!parentFieldMapper.active()) {
throw new QueryParsingException(parseContext.index(), "Type [" + childType + "] does not have parent mapping");
throw new QueryParsingException(parseContext, "Type [" + childType + "] does not have parent mapping");
}
String parentType = parentFieldMapper.type();
@ -154,11 +154,12 @@ public class HasChildFilterParser implements FilterParser {
DocumentMapper parentDocMapper = parseContext.mapperService().documentMapper(parentType);
if (parentDocMapper == null) {
throw new QueryParsingException(parseContext.index(), "[has_child] Type [" + childType + "] points to a non existent parent type [" + parentType + "]");
throw new QueryParsingException(parseContext, "[has_child] Type [" + childType + "] points to a non existent parent type ["
+ parentType + "]");
}
if (maxChildren > 0 && maxChildren < minChildren) {
throw new QueryParsingException(parseContext.index(), "[has_child] 'max_children' is less than 'min_children'");
throw new QueryParsingException(parseContext, "[has_child] 'max_children' is less than 'min_children'");
}
BitDocIdSetFilter nonNestedDocsFilter = null;

@ -92,7 +92,7 @@ public class HasChildQueryParser implements QueryParser {
} else if ("inner_hits".equals(currentFieldName)) {
innerHits = innerHitsQueryParserHelper.parse(parseContext);
} else {
throw new QueryParsingException(parseContext.index(), "[has_child] query does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[has_child] query does not support [" + currentFieldName + "]");
}
} else if (token.isValue()) {
if ("type".equals(currentFieldName) || "child_type".equals(currentFieldName) || "childType".equals(currentFieldName)) {
@ -112,15 +112,15 @@ public class HasChildQueryParser implements QueryParser {
} else if ("_name".equals(currentFieldName)) {
queryName = parser.text();
} else {
throw new QueryParsingException(parseContext.index(), "[has_child] query does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[has_child] query does not support [" + currentFieldName + "]");
}
}
}
if (!queryFound) {
throw new QueryParsingException(parseContext.index(), "[has_child] requires 'query' field");
throw new QueryParsingException(parseContext, "[has_child] requires 'query' field");
}
if (childType == null) {
throw new QueryParsingException(parseContext.index(), "[has_child] requires 'type' field");
throw new QueryParsingException(parseContext, "[has_child] requires 'type' field");
}
Query innerQuery = iq.asQuery(childType);
@ -132,10 +132,10 @@ public class HasChildQueryParser implements QueryParser {
DocumentMapper childDocMapper = parseContext.mapperService().documentMapper(childType);
if (childDocMapper == null) {
throw new QueryParsingException(parseContext.index(), "[has_child] No mapping for for type [" + childType + "]");
throw new QueryParsingException(parseContext, "[has_child] No mapping for for type [" + childType + "]");
}
if (!childDocMapper.parentFieldMapper().active()) {
throw new QueryParsingException(parseContext.index(), "[has_child] Type [" + childType + "] does not have parent mapping");
throw new QueryParsingException(parseContext, "[has_child] Type [" + childType + "] does not have parent mapping");
}
if (innerHits != null) {
@ -146,18 +146,18 @@ public class HasChildQueryParser implements QueryParser {
ParentFieldMapper parentFieldMapper = childDocMapper.parentFieldMapper();
if (!parentFieldMapper.active()) {
throw new QueryParsingException(parseContext.index(), "[has_child] _parent field not configured");
throw new QueryParsingException(parseContext, "[has_child] _parent field not configured");
}
String parentType = parentFieldMapper.type();
DocumentMapper parentDocMapper = parseContext.mapperService().documentMapper(parentType);
if (parentDocMapper == null) {
throw new QueryParsingException(parseContext.index(), "[has_child] Type [" + childType
+ "] points to a non existent parent type [" + parentType + "]");
throw new QueryParsingException(parseContext, "[has_child] Type [" + childType + "] points to a non existent parent type ["
+ parentType + "]");
}
if (maxChildren > 0 && maxChildren < minChildren) {
throw new QueryParsingException(parseContext.index(), "[has_child] 'max_children' is less than 'min_children'");
throw new QueryParsingException(parseContext, "[has_child] 'max_children' is less than 'min_children'");
}
BitDocIdSetFilter nonNestedDocsFilter = null;

@ -83,7 +83,7 @@ public class HasParentFilterParser implements FilterParser {
} else if ("inner_hits".equals(currentFieldName)) {
innerHits = innerHitsQueryParserHelper.parse(parseContext);
} else {
throw new QueryParsingException(parseContext.index(), "[has_parent] filter does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[has_parent] filter does not support [" + currentFieldName + "]");
}
} else if (token.isValue()) {
if ("type".equals(currentFieldName) || "parent_type".equals(currentFieldName) || "parentType".equals(currentFieldName)) {
@ -95,15 +95,15 @@ public class HasParentFilterParser implements FilterParser {
} else if ("_cache_key".equals(currentFieldName) || "_cacheKey".equals(currentFieldName)) {
// noop to be backwards compatible
} else {
throw new QueryParsingException(parseContext.index(), "[has_parent] filter does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[has_parent] filter does not support [" + currentFieldName + "]");
}
}
}
if (!queryFound && !filterFound) {
throw new QueryParsingException(parseContext.index(), "[has_parent] filter requires 'query' or 'filter' field");
throw new QueryParsingException(parseContext, "[has_parent] filter requires 'query' or 'filter' field");
}
if (parentType == null) {
throw new QueryParsingException(parseContext.index(), "[has_parent] filter requires 'parent_type' field");
throw new QueryParsingException(parseContext, "[has_parent] filter requires 'parent_type' field");
}
Query innerQuery;

@ -23,7 +23,6 @@ import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.FilteredQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.QueryWrapperFilter;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.collect.Tuple;
import org.elasticsearch.common.inject.Inject;
@ -88,7 +87,7 @@ public class HasParentQueryParser implements QueryParser {
} else if ("inner_hits".equals(currentFieldName)) {
innerHits = innerHitsQueryParserHelper.parse(parseContext);
} else {
throw new QueryParsingException(parseContext.index(), "[has_parent] query does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[has_parent] query does not support [" + currentFieldName + "]");
}
} else if (token.isValue()) {
if ("type".equals(currentFieldName) || "parent_type".equals(currentFieldName) || "parentType".equals(currentFieldName)) {
@ -112,15 +111,15 @@ public class HasParentQueryParser implements QueryParser {
} else if ("_name".equals(currentFieldName)) {
queryName = parser.text();
} else {
throw new QueryParsingException(parseContext.index(), "[has_parent] query does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[has_parent] query does not support [" + currentFieldName + "]");
}
}
}
if (!queryFound) {
throw new QueryParsingException(parseContext.index(), "[has_parent] query requires 'query' field");
throw new QueryParsingException(parseContext, "[has_parent] query requires 'query' field");
}
if (parentType == null) {
throw new QueryParsingException(parseContext.index(), "[has_parent] query requires 'parent_type' field");
throw new QueryParsingException(parseContext, "[has_parent] query requires 'parent_type' field");
}
Query innerQuery = iq.asQuery(parentType);
@ -145,7 +144,8 @@ public class HasParentQueryParser implements QueryParser {
static Query createParentQuery(Query innerQuery, String parentType, boolean score, QueryParseContext parseContext, Tuple<String, SubSearchContext> innerHits) {
DocumentMapper parentDocMapper = parseContext.mapperService().documentMapper(parentType);
if (parentDocMapper == null) {
throw new QueryParsingException(parseContext.index(), "[has_parent] query configured 'parent_type' [" + parentType + "] is not a valid type");
throw new QueryParsingException(parseContext, "[has_parent] query configured 'parent_type' [" + parentType
+ "] is not a valid type");
}
if (innerHits != null) {
@ -169,7 +169,7 @@ public class HasParentQueryParser implements QueryParser {
}
}
if (parentChildIndexFieldData == null) {
throw new QueryParsingException(parseContext.index(), "[has_parent] no _parent field configured");
throw new QueryParsingException(parseContext, "[has_parent] no _parent field configured");
}
Filter parentFilter = null;

@ -68,7 +68,7 @@ public class IdsFilterParser implements FilterParser {
while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {
BytesRef value = parser.utf8BytesOrNull();
if (value == null) {
throw new QueryParsingException(parseContext.index(), "No value specified for term filter");
throw new QueryParsingException(parseContext, "No value specified for term filter");
}
ids.add(value);
}
@ -77,12 +77,12 @@ public class IdsFilterParser implements FilterParser {
while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {
String value = parser.textOrNull();
if (value == null) {
throw new QueryParsingException(parseContext.index(), "No type specified for term filter");
throw new QueryParsingException(parseContext, "No type specified for term filter");
}
types.add(value);
}
} else {
throw new QueryParsingException(parseContext.index(), "[ids] filter does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[ids] filter does not support [" + currentFieldName + "]");
}
} else if (token.isValue()) {
if ("type".equals(currentFieldName) || "_type".equals(currentFieldName)) {
@ -90,13 +90,13 @@ public class IdsFilterParser implements FilterParser {
} else if ("_name".equals(currentFieldName)) {
filterName = parser.text();
} else {
throw new QueryParsingException(parseContext.index(), "[ids] filter does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[ids] filter does not support [" + currentFieldName + "]");
}
}
}
if (!idsProvided) {
throw new QueryParsingException(parseContext.index(), "[ids] filter requires providing a values element");
throw new QueryParsingException(parseContext, "[ids] filter requires providing a values element");
}
if (ids.isEmpty()) {

@ -74,12 +74,12 @@ public class IdsQueryParser implements QueryParser {
(token == XContentParser.Token.VALUE_NUMBER)) {
BytesRef value = parser.utf8BytesOrNull();
if (value == null) {
throw new QueryParsingException(parseContext.index(), "No value specified for term filter");
throw new QueryParsingException(parseContext, "No value specified for term filter");
}
ids.add(value);
} else {
throw new QueryParsingException(parseContext.index(),
"Illegal value for id, expecting a string or number, got: " + token);
throw new QueryParsingException(parseContext, "Illegal value for id, expecting a string or number, got: "
+ token);
}
}
} else if ("types".equals(currentFieldName) || "type".equals(currentFieldName)) {
@ -87,12 +87,12 @@ public class IdsQueryParser implements QueryParser {
while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {
String value = parser.textOrNull();
if (value == null) {
throw new QueryParsingException(parseContext.index(), "No type specified for term filter");
throw new QueryParsingException(parseContext, "No type specified for term filter");
}
types.add(value);
}
} else {
throw new QueryParsingException(parseContext.index(), "[ids] query does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[ids] query does not support [" + currentFieldName + "]");
}
} else if (token.isValue()) {
if ("type".equals(currentFieldName) || "_type".equals(currentFieldName)) {
@ -102,13 +102,13 @@ public class IdsQueryParser implements QueryParser {
} else if ("_name".equals(currentFieldName)) {
queryName = parser.text();
} else {
throw new QueryParsingException(parseContext.index(), "[ids] query does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[ids] query does not support [" + currentFieldName + "]");
}
}
}
if (!idsProvided) {
throw new QueryParsingException(parseContext.index(), "[ids] query, no ids values provided");
throw new QueryParsingException(parseContext, "[ids] query, no ids values provided");
}
if (ids.isEmpty()) {

@ -210,7 +210,7 @@ public class IndexQueryParserService extends AbstractIndexComponent {
} catch (QueryParsingException e) {
throw e;
} catch (Exception e) {
throw new QueryParsingException(index, "Failed to parse", e);
throw new QueryParsingException(getParseContext(), "Failed to parse", e);
} finally {
if (parser != null) {
parser.close();
@ -230,7 +230,7 @@ public class IndexQueryParserService extends AbstractIndexComponent {
} catch (QueryParsingException e) {
throw e;
} catch (Exception e) {
throw new QueryParsingException(index, "Failed to parse", e);
throw new QueryParsingException(getParseContext(), "Failed to parse", e);
} finally {
if (parser != null) {
parser.close();
@ -250,7 +250,7 @@ public class IndexQueryParserService extends AbstractIndexComponent {
} catch (QueryParsingException e) {
throw e;
} catch (Exception e) {
throw new QueryParsingException(index, "Failed to parse", e);
throw new QueryParsingException(context, "Failed to parse", e);
} finally {
if (parser != null) {
parser.close();
@ -266,7 +266,7 @@ public class IndexQueryParserService extends AbstractIndexComponent {
} catch (QueryParsingException e) {
throw e;
} catch (Exception e) {
throw new QueryParsingException(index, "Failed to parse [" + source + "]", e);
throw new QueryParsingException(getParseContext(), "Failed to parse [" + source + "]", e);
} finally {
if (parser != null) {
parser.close();
@@ -282,7 +282,7 @@ public class IndexQueryParserService extends AbstractIndexComponent {
try {
return innerParse(context, parser);
} catch (IOException e) {
throw new QueryParsingException(index, "Failed to parse", e);
throw new QueryParsingException(context, "Failed to parse", e);
}
}
@@ -359,7 +359,7 @@ public class IndexQueryParserService extends AbstractIndexComponent {
XContentParser qSourceParser = XContentFactory.xContent(querySource).createParser(querySource);
parsedQuery = parse(qSourceParser);
} else {
throw new QueryParsingException(index(), "request does not support [" + fieldName + "]");
throw new QueryParsingException(getParseContext(), "request does not support [" + fieldName + "]");
}
}
}
@@ -369,10 +369,10 @@ public class IndexQueryParserService extends AbstractIndexComponent {
} catch (QueryParsingException e) {
throw e;
} catch (Throwable e) {
throw new QueryParsingException(index, "Failed to parse", e);
throw new QueryParsingException(getParseContext(), "Failed to parse", e);
}
throw new QueryParsingException(index(), "Required query is missing");
throw new QueryParsingException(getParseContext(), "Required query is missing");
}
private ParsedQuery innerParse(QueryParseContext parseContext, XContentParser parser) throws IOException, QueryParsingException {
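
The `IndexQueryParserService` hunks above repeat one error-wrapping pattern at every entry point, changing only the source of the context (`getParseContext()` on the paths with no context in scope, the local `context` or `parseContext` where one already exists). Condensed into a single sketch that reproduces only the control flow shown in the hunks; the actual parsing in the `try` body is elided:

[source,java]
--------------------------------------------------
XContentParser parser = null;
try {
    // ... create the parser and parse the query source (elided) ...
} catch (QueryParsingException e) {
    throw e;  // already carries parsing detail; rethrow unchanged
} catch (Exception e) {
    // anything else is wrapped so the caller receives the parse
    // context instead of the bare index
    throw new QueryParsingException(getParseContext(), "Failed to parse", e);
} finally {
    if (parser != null) {
        parser.close();
    }
}
--------------------------------------------------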

View File

@@ -83,30 +83,30 @@ public class IndicesFilterParser implements FilterParser {
noMatchFilter = parseContext.parseInnerFilter();
}
} else {
throw new QueryParsingException(parseContext.index(), "[indices] filter does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[indices] filter does not support [" + currentFieldName + "]");
}
} else if (token == XContentParser.Token.START_ARRAY) {
if ("indices".equals(currentFieldName)) {
if (indicesFound) {
throw new QueryParsingException(parseContext.index(), "[indices] indices or index already specified");
throw new QueryParsingException(parseContext, "[indices] indices or index already specified");
}
indicesFound = true;
Collection<String> indices = new ArrayList<>();
while (parser.nextToken() != XContentParser.Token.END_ARRAY) {
String value = parser.textOrNull();
if (value == null) {
throw new QueryParsingException(parseContext.index(), "[indices] no value specified for 'indices' entry");
throw new QueryParsingException(parseContext, "[indices] no value specified for 'indices' entry");
}
indices.add(value);
}
currentIndexMatchesIndices = matchesIndices(parseContext.index().name(), indices.toArray(new String[indices.size()]));
} else {
throw new QueryParsingException(parseContext.index(), "[indices] filter does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[indices] filter does not support [" + currentFieldName + "]");
}
} else if (token.isValue()) {
if ("index".equals(currentFieldName)) {
if (indicesFound) {
throw new QueryParsingException(parseContext.index(), "[indices] indices or index already specified");
throw new QueryParsingException(parseContext, "[indices] indices or index already specified");
}
indicesFound = true;
currentIndexMatchesIndices = matchesIndices(parseContext.index().name(), parser.text());
@@ -120,15 +120,15 @@ public class IndicesFilterParser implements FilterParser {
} else if ("_name".equals(currentFieldName)) {
filterName = parser.text();
} else {
throw new QueryParsingException(parseContext.index(), "[indices] filter does not support [" + currentFieldName + "]");
throw new QueryParsingException(parseContext, "[indices] filter does not support [" + currentFieldName + "]");
}
}
}
if (!filterFound) {
throw new QueryParsingException(parseContext.index(), "[indices] requires 'filter' element");
throw new QueryParsingException(parseContext, "[indices] requires 'filter' element");
}
if (!indicesFound) {
throw new QueryParsingException(parseContext.index(), "[indices] requires 'indices' or 'index' element");
throw new QueryParsingException(parseContext, "[indices] requires 'indices' or 'index' element");
}
Filter chosenFilter;
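
`IndicesFilterParser` applies the same substitution to a validate-after-parse structure: the token loop records what it has seen (`indicesFound`, `filterFound`), decides at parse time whether the current index matches via `matchesIndices(parseContext.index().name(), ...)`, and only after the loop enforces the required elements. The two post-loop checks, copied from the hunk above:

[source,java]
--------------------------------------------------
// Parsing tolerates any ordering of the 'filter' and 'indices'/'index'
// elements; presence of both is enforced once the loop has finished.
if (!filterFound) {
    throw new QueryParsingException(parseContext, "[indices] requires 'filter' element");
}
if (!indicesFound) {
    throw new QueryParsingException(parseContext, "[indices] requires 'indices' or 'index' element");
}
--------------------------------------------------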

Some files were not shown because too many files have changed in this diff.