Add validation tests for Version constants
Currently the snapshot flag for Version constants is only set to true
for CURRENT. However, this means that the snapshot state of a given
version changes from branch to branch. Instead, snapshot should be tied
to whether that version has been released. This change also adds a
validation test checking that the ID -> constant lookup and its inverse
agree, and fixes one bug found by it (for an unreleased version).
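A minimal sketch of what such a round-trip test could look like, assuming a `Version` class whose public static constants carry an `id` and can be looked up via `fromId(int)` (all names and ids below are illustrative, not the actual Elasticsearch code):
```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for the real Version class, for illustration only.
final class Version {
    private static final Map<Integer, Version> ID_MAP = new HashMap<>();

    public static final Version V_1_4_2 = new Version(1040299, false);
    public static final Version V_1_5_0 = new Version(1050099, true); // unreleased -> snapshot

    public final int id;
    public final boolean snapshot;

    private Version(int id, boolean snapshot) {
        this.id = id;
        this.snapshot = snapshot;
        ID_MAP.put(id, this);
    }

    public static Version fromId(int id) {
        return ID_MAP.get(id);
    }
}

public class VersionConstantsTest {
    public static void main(String[] args) throws Exception {
        // Walk every public static Version constant via reflection and check
        // that the id -> constant lookup returns that same constant.
        for (Field field : Version.class.getFields()) {
            if (Modifier.isStatic(field.getModifiers()) && field.getType() == Version.class) {
                Version constant = (Version) field.get(null);
                if (Version.fromId(constant.id) != constant) {
                    throw new AssertionError("round trip failed for " + field.getName());
                }
            }
        }
        System.out.println("all Version constants round-trip correctly");
    }
}
```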
- don't allow soft references in the recycler anymore
- remove some abusive thread locals
- don't recycle float/double and int/long pages independently: they have the
  same bit width and just interpret the bits differently (illustrated below)
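The reason a single pool can back both primitive pairs is that they share a bit width, and the JDK already provides lossless bit conversions. A small illustration of the idea (not the recycler code itself):
```java
public class BitsReinterpretation {
    public static void main(String[] args) {
        // A page recycled as long[] can serve doubles: the bits are
        // identical, only their interpretation differs.
        long[] page = new long[4];
        page[0] = Double.doubleToRawLongBits(3.14);
        System.out.println(Double.longBitsToDouble(page[0])); // 3.14

        // Same story for int[] pages backing floats.
        int[] intPage = new int[4];
        intPage[0] = Float.floatToRawIntBits(2.5f);
        System.out.println(Float.intBitsToFloat(intPage[0])); // 2.5
    }
}
```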
Close #9272
BucketAggregationMode used to be part of the framework; now it's only an
implementation detail of the terms, histogram, geohash grid and scripted
aggregators.
Aggregator.estimatedBucketCount() was a complicated way to do the initial sizing
of the data structures, but it did not work well in practice and was rather
likely to lead to over-sized data structures causing OOMEs. It's removed now:
all data structures start with a size of 1 and grow exponentially, as sketched
below.
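A sketch of the start-at-1, grow-exponentially sizing policy (illustrative only, not the actual data structure code):
```java
import java.util.Arrays;

public class GrowableLongArray {
    private long[] values = new long[1]; // start small, grow on demand

    public void set(int index, long value) {
        if (index >= values.length) {
            int newSize = values.length;
            while (newSize <= index) {
                newSize <<= 1; // double until the index fits
            }
            values = Arrays.copyOf(values, newSize);
        }
        values[index] = value;
    }

    public long get(int index) {
        return index < values.length ? values[index] : 0L;
    }
}
```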
Aggregator.preCollection() is now symmetric with postCollection(): it exists on
all aggregation objects where postCollection() also exists and recursively calls
its children.
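Roughly, the new symmetry could be pictured like this (hypothetical skeleton, using names from the description above):
```java
// Hypothetical skeleton of the symmetric pre/post collection contract.
abstract class Aggregator {
    protected Aggregator[] subAggregators = new Aggregator[0];

    // Both hooks exist on every aggregator and recurse into the children,
    // so setup and teardown are mirror images of each other.
    public final void preCollection() {
        for (Aggregator sub : subAggregators) {
            sub.preCollection();
        }
        doPreCollection();
    }

    public final void postCollection() {
        for (Aggregator sub : subAggregators) {
            sub.postCollection();
        }
        doPostCollection();
    }

    protected void doPreCollection() {}
    protected void doPostCollection() {}
}
```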
Fixed other minor issues related to generics and exceptions.
Close #9097
The query cache has a mechanism that disables it automatically when
SearchContext.nowInMillis() is used. One issue with that is that the date math
parser always evaluates the current timestamp when parsing a date, even if it
is not needed. As a consequence, whenever you use a date expression in your
queries, the query cache is not used.
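One way to make the evaluation lazy, sketched as a memoizing holder (illustrative; the real SearchContext API is more involved):
```java
public class LazyNow {
    private Long nowInMillis; // resolved at most once, and only if actually asked for

    // Callers that parse date math get the timestamp through this method
    // instead of receiving an eagerly computed value.
    public long nowInMillis() {
        if (nowInMillis == null) {
            nowInMillis = System.currentTimeMillis();
        }
        return nowInMillis;
    }

    // The cache can now tell whether "now" was really used by the query.
    public boolean nowUsed() {
        return nowInMillis != null;
    }
}
```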
Close #9225
Some of our Java API requests have public setters, but their corresponding getters are package-private. This commit makes those getters public as well.
Closes #9273
This change fixes _timestamp's serialization method to write out
`doc_values` and `doc_values_format`, which could already be set
but would not be written out.
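The shape of the fix, sketched with hypothetical field names (the real mapper serializes via XContent):
```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TimestampSerialization {
    // Hypothetical stand-ins for the mapper's configured settings.
    static Boolean docValues = Boolean.TRUE;
    static String docValuesFormat = "disk";

    public static void main(String[] args) {
        Map<String, Object> out = new LinkedHashMap<>();
        out.put("enabled", true);
        // The fix: settings that can be configured must also be written
        // back out, otherwise they are lost on the next serialization.
        if (docValues != null) {
            out.put("doc_values", docValues);
        }
        if (docValuesFormat != null) {
            out.put("doc_values_format", docValuesFormat);
        }
        System.out.println(out); // {enabled=true, doc_values=true, doc_values_format=disk}
    }
}
```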
Closes #8893
Closes #8967
Today we give the HTTP status back within the HTTP response itself and duplicate it within the JSON response as well:
```sh
curl localhost:9200/
```
```js
{
  "status" : 200,
  "name" : "Red Wolf",
  "version" : {
    "number" : "2.0.0",
    "build_hash" : "6837a61d8a646a2ac7dc8da1ab3c4ab85d60882d",
    "build_timestamp" : "2014-08-19T13:55:56Z",
    "build_snapshot" : true,
    "lucene_version" : "4.9"
  },
  "tagline" : "You Know, for Search"
}
```
On Windows platforms, when JAVA_HOME is not defined, a message is printed on standard output and the bat script pauses until the user presses a key. This behavior does not suit automated processes where elasticsearch.bat is executed by another script. This commit adds a new parameter --silent / -s that allows skipping the pause. Also, the error message is now directed to both the standard and error outputs.
Closes #8913
This commit adds a test that simulates disconnecting nodes and dropping requests during the various stages of recovery, and solves all the issues that were raised by it. In short:
1) Ongoing recoveries are scheduled for retry upon network disconnect. The default retry period is 5s (cross-node connections are checked every 10s by default).
2) Sometimes the disconnect happens after the target engine has started (but the shard is still in recovery). For simplicity, I opted to restart the recovery from scratch (little to no files will be copied again, because they were just synced).
3) To protect against dropped requests, a recovery monitor was added that fails a recovery if no progress has been made in the last 30m (by default), which is in line with the long timeouts we use in recovery requests (see the sketch after this list).
4) When a shard fails on a node, we try to assign it to another node. If no such node is available, the shard remains unassigned, causing the target node to clean any in-memory state for it (files on disk remain). The shard would then stay unassigned until another cluster state change happens to re-assign it to the node in question, but if no such change happens the shard remains stuck. The commit adds an extra delayed reroute in such cases to make sure the shard will be reassigned.
5) Moved all recovery-related settings to RecoverySettings.
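A sketch of the monitor idea from point 3, under assumed names and defaults: a background task periodically compares a progress counter against its last observation and fails the recovery when nothing has moved.
```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class RecoveryMonitor {
    private final AtomicLong lastSeenBytes = new AtomicLong();
    private final AtomicLong recoveredBytes = new AtomicLong();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Recovery code bumps this counter whenever a file chunk arrives.
    public void onChunkReceived(long bytes) {
        recoveredBytes.addAndGet(bytes);
    }

    public void start(final Runnable failRecovery, long checkIntervalMinutes) {
        scheduler.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                long current = recoveredBytes.get();
                // No progress since the last check: give up on the recovery
                // instead of waiting forever on a request that may have been dropped.
                if (current == lastSeenBytes.getAndSet(current)) {
                    failRecovery.run();
                }
            }
        }, checkIntervalMinutes, checkIntervalMinutes, TimeUnit.MINUTES);
    }
}
```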
Closes #8720
You can now specify `format` in the request definition for most numeric metric aggregations. The exceptions are `percentile_ranks`, `cardinality` and `value_count`, as their response type can differ from the field type, so the formatter won't work.
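Assuming the usual `DecimalFormat`-style patterns for numeric formats, what a pattern does to a raw metric value can be previewed directly in Java (illustrative):
```java
import java.text.DecimalFormat;

public class FormatPreview {
    public static void main(String[] args) {
        double avg = 1234.5678; // e.g. the raw value of an avg aggregation
        // "#,##0.00" renders with grouping and two decimal places.
        System.out.println(new DecimalFormat("#,##0.00").format(avg)); // 1,234.57 (English locale)
        System.out.println(new DecimalFormat("0.0").format(avg));      // 1234.6
    }
}
```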
Closes #6812
Today the internal engine closes itself if it hits an exception
it cannot recover from. This complicates a lot of refcounting issues
if such an exception happens during engine creation. This commit
only marks the engine as failed and lets the caller close it once the exception
bubbles up. Additionally, it rolls back the IndexWriter to prevent any changes after
the engine has failed.
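The failure path, roughly (a hypothetical sketch; the real engine also notifies listeners and handles more state): mark the engine failed and roll back the `IndexWriter` so nothing written after the failure becomes visible.
```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;
import org.apache.lucene.index.IndexWriter;

public class FailableEngine {
    private final IndexWriter writer;
    private final AtomicBoolean failed = new AtomicBoolean(false);

    public FailableEngine(IndexWriter writer) {
        this.writer = writer;
    }

    // Instead of closing itself, the engine only records the failure and
    // rolls back the writer; the owner closes it once the exception bubbles up.
    public void failEngine(Throwable cause) {
        if (failed.compareAndSet(false, true)) {
            try {
                writer.rollback(); // discard all changes since the last commit
            } catch (IOException e) {
                // best effort - the engine is already failed
            }
        }
    }

    public boolean isFailed() {
        return failed.get();
    }
}
```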
I have a field with a `null` [default `_timestamp` value](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-timestamp-field.html#mapping-timestamp-field-default) and when I try to update the mapping I get a server error caused by a `NullPointerException`
```
[2015-01-08 17:28:56,040][DEBUG][action.admin.indices.mapping.put] [...] failed to put mappings on indices [[feed_170_v1, feed_204_v1, feed_229_v1, feed_232_v1, feed_239_v1, feed_248_v1, feed_268_v1, feed_256_v1, feed_272_v1, feed_159_v1, feed_255_v1, feed_164_v1, feed_259_v1, feed_266_v1, feed_188_v1, feed_240_v1, feed_233_v1, feed_13_v1, feed_184_v1, feed_261_v1, feed_267_v1, feed_271_v1, feed_257_v1, feed_172_v1, feed_238_v1, feed_254_v1, feed_223_v1, feed_274_v1, feed_203_v1, feed_269_v1, feed_262_v1, feed_205_v1, feed_168_v1, feed_219_v1, feed_253_v1, feed_251_v1, feed_173_v1, feed_252_v1, feed_210_v1, feed_216_v1, feed_218_v1, feed_118_v1, feed_273_v1, feed_227_v1, feed_166_v1, feed_213_v1, feed_226_v1]], type [history]
java.lang.NullPointerException
at org.elasticsearch.index.mapper.internal.TimestampFieldMapper.merge(TimestampFieldMapper.java:287)
at org.elasticsearch.index.mapper.object.ObjectMapper.merge(ObjectMapper.java:936)
at org.elasticsearch.index.mapper.DocumentMapper.merge(DocumentMapper.java:693)
at org.elasticsearch.cluster.metadata.MetaDataMappingService$4.execute(MetaDataMappingService.java:508)
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:329)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:153)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
```
https://github.com/elasticsearch/elasticsearch/blob/v1.4.2/src/main/java/org/elasticsearch/index/mapper/internal/TimestampFieldMapper.java#L286
Looks like the existence of the default timestamp is not checked before use. The next line has the same issue: it uses the default timestamp without checking whether it is null.
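The fix amounts to a null-safe comparison in the merge conflict check, along these lines (a hypothetical sketch of the pattern, not the exact patch):
```java
import java.util.Objects;

public class TimestampMergeCheck {
    // Hypothetical stand-ins for the two mappers' configured defaults.
    static String existingDefault = null;   // default: null triggered the NPE
    static String mergedDefault = "now";

    public static void main(String[] args) {
        // Before the fix: existingDefault.equals(mergedDefault) -> NullPointerException.
        // After: a null-safe comparison detects the conflict without blowing up.
        if (!Objects.equals(existingDefault, mergedDefault)) {
            System.out.println("conflict: cannot update default in _timestamp value");
        }
    }
}
```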
To reproduce:
```
$ curl -XPUT localhost:9200/twitter2
$ curl -XPUT localhost:9200/twitter2/tweet/_mapping -d '{
"tweet" : {
"_timestamp" : {
"enabled" : true,
"default" : null
}
}
}'
$ curl -XPUT localhost:9200/twitter2/tweet/_mapping -d '{
"tweet" : {
"_timestamp" : {
"enabled" : true,
"default" : null
},
"properties": {
"user": {"type": "string"}
}
}
}'
```
Closes #9204.
(cherry picked from commit 62c6d63)
Before Elasticsearch 1.0, the type was allowed to be passed as the root
element when uploading a document. However, this was ambiguous if the
mappings also contained a field with the same name as the type. The
behavior was changed in 1.0 to not allow this, but a setting was added
for backwards compatibility. This change removes the setting for 2.0.
The header indicates how many shard copies (primary and replica shards) a write was supposed to go to, on how many
shard copies the write succeeded, and potentially captures shard failures if writing into a replica shard fails.
For async writes it also includes the number of shard copies on which the write is still pending.
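A sketch of what such a header could carry (hypothetical field names, not the actual response class):
```java
public class ShardWriteHeader {
    public final int total;      // shard copies the write was supposed to go to
    public final int successful; // copies on which the write succeeded
    public final int failed;     // replica failures captured in the response
    public final int pending;    // async writes still outstanding

    public ShardWriteHeader(int total, int successful, int failed, int pending) {
        this.total = total;
        this.successful = successful;
        this.failed = failed;
        this.pending = pending;
    }

    @Override
    public String toString() {
        return "{total=" + total + ", successful=" + successful
                + ", failed=" + failed + ", pending=" + pending + "}";
    }

    public static void main(String[] args) {
        // e.g. a write meant for 1 primary + 2 replicas where one replica failed
        System.out.println(new ShardWriteHeader(3, 2, 1, 0));
    }
}
```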
Closes #7994
This commit removes most of the Engine abstractions and removes
Engine exposure via dependency injection. It also removes the Holder
abstraction and makes the engine itself start at construction time.
It removes the start method from the engine entirely, which means no engine
instance exists that is not started. There is also no way to stop the
engine and restart it; an entirely new Engine must be created instead.
This fix ensures that calls to the GET alias/mappings/settings/warmers APIs return the aliases/mappings/settings/warmers object even if there is no content within them. This makes them consistent with the GET index API docs and the breaking changes in the 1.4 docs.
Closes #9148
This currently only adds checks to BaseFuture, but this should already cover
lots of client code. We could add more in the future, like interactions with
the filesystem and so on.
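A sketch of one way such a check could look, assuming the intent is to forbid blocking `get()` calls from sensitive (e.g. network) threads; the thread-name test below is a stand-in for however the real code identifies those threads:
```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;

public class CheckedFuture<V> extends FutureTask<V> {
    public CheckedFuture(Callable<V> callable) {
        super(callable);
    }

    @Override
    public V get() throws InterruptedException, ExecutionException {
        // Blocking on a future from a network thread can deadlock the
        // event loop, so fail loudly instead of silently waiting.
        if (Thread.currentThread().getName().contains("[network]")) {
            throw new AssertionError("blocking call on a network thread");
        }
        return super.get();
    }
}
```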
Close #9164
The multi percolate shard responses are collected in an atomic array that uses the shard id as index, but this array was sized to the number of shards the multi percolate request was meant to go to instead of the total number of shards the index has. This caused an exception when routing was used.
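The essence of the bug, sketched with illustrative numbers: the array must be sized by the index's total shard count because responses are slotted by shard id, even when routing narrows the request to fewer shards.
```java
import java.util.concurrent.atomic.AtomicReferenceArray;

public class ShardResponseCollector {
    public static void main(String[] args) {
        int totalShardsInIndex = 5;  // index has shard ids 0..4
        int shardsRequestWentTo = 1; // routing narrowed the request to one shard
        int respondingShardId = 3;

        // Buggy sizing: new AtomicReferenceArray<>(shardsRequestWentTo)
        // -> slot 3 is out of bounds. Correct sizing uses the index total:
        AtomicReferenceArray<String> responses =
                new AtomicReferenceArray<>(totalShardsInIndex);
        responses.set(respondingShardId, "shard 3 response");
        System.out.println(responses.get(respondingShardId));
    }
}
```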
Closes #6214