This commit adds a test that simulates disconnecting nodes and dropping requests during the various stages of recovery, and fixes all the issues that were raised by it. In short:
1) Ongoing recoveries will be scheduled for retry upon network disconnect. The default retry period is 5s (cross-node connections are checked every 10s by default).
2) Sometimes the disconnect happens after the target engine has started (but the shard is still in recovery). For simplicity, I opted to restart the recovery from scratch (where few to no files will be copied again, because they were just synced).
3) To protect against dropped requests, a Recovery Monitor was added that fails a recovery if no progress has been made in the last 30m (by default), which is equivalent to the long timeouts we use in recovery requests.
4) When a shard fails on a node, we try to assign it to another node. If no such node is available, the shard will remain unassigned, causing the target node to clean up any in-memory state for it (files on disk remain). At the moment the shard stays unassigned until another cluster state change happens, which would re-assign it to the node in question, but if no such change happens the shard remains stuck as unassigned. This commit adds an extra delayed reroute in such cases to make sure the shard will be reassigned.
5) Moved all recovery-related settings to RecoverySettings.
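For illustration, a minimal sketch of how the retry delay and the recovery monitor timeout might be tuned in elasticsearch.yml. The setting names below are assumptions based on the description above; RecoverySettings is the authoritative source:
```
# Assumed setting names -- verify against RecoverySettings
indices.recovery.retry_delay_network: 5s        # delay before retrying a recovery after a network disconnect
indices.recovery.recovery_activity_timeout: 30m # fail a recovery if no progress was made within this window
```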
Closes#8720
You can now specify `format` in the request definition for most numeric metric aggregations. The exceptions are Percentile_Ranks, Cardinality and Value_Count, as their response type can differ from the field type, so the formatter won't work.
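For example (index and field names are made up), a request along these lines attaches a format to an avg aggregation; the response should then contain a formatted `value_as_string` next to the raw `value`:
```
curl -XGET 'localhost:9200/sales/_search?search_type=count' -d '{
  "aggs" : {
    "avg_price" : {
      "avg" : { "field" : "price", "format" : "#,##0.00" }
    }
  }
}'
```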
Closes#6812
Today the internal engine closes itself if the engine hits an exception
it cannot recover from. This complicates a lot of refcounting issues
if such an exception happens during engine creation. This commit
only marks the engine as failed and lets the caller close it once the exception
bubbles up. Additionally it rolls back the IndexWriter to prevent any changes after
the engine has failed.
I have a mapping with a `null` [default `_timestamp` value](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-timestamp-field.html#mapping-timestamp-field-default), and when I try to update the mapping I get a server error caused by a `NullPointerException`:
```
[2015-01-08 17:28:56,040][DEBUG][action.admin.indices.mapping.put] [...] failed to put mappings on indices [[feed_170_v1, feed_204_v1, feed_229_v1, feed_232_v1, feed_239_v1, feed_248_v1, feed_268_v1, feed_256_v1, feed_272_v1, feed_159_v1, feed_255_v1, feed_164_v1, feed_259_v1, feed_266_v1, feed_188_v1, feed_240_v1, feed_233_v1, feed_13_v1, feed_184_v1, feed_261_v1, feed_267_v1, feed_271_v1, feed_257_v1, feed_172_v1, feed_238_v1, feed_254_v1, feed_223_v1, feed_274_v1, feed_203_v1, feed_269_v1, feed_262_v1, feed_205_v1, feed_168_v1, feed_219_v1, feed_253_v1, feed_251_v1, feed_173_v1, feed_252_v1, feed_210_v1, feed_216_v1, feed_218_v1, feed_118_v1, feed_273_v1, feed_227_v1, feed_166_v1, feed_213_v1, feed_226_v1]], type [history]
java.lang.NullPointerException
at org.elasticsearch.index.mapper.internal.TimestampFieldMapper.merge(TimestampFieldMapper.java:287)
at org.elasticsearch.index.mapper.object.ObjectMapper.merge(ObjectMapper.java:936)
at org.elasticsearch.index.mapper.DocumentMapper.merge(DocumentMapper.java:693)
at org.elasticsearch.cluster.metadata.MetaDataMappingService$4.execute(MetaDataMappingService.java:508)
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:329)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:153)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
```
https://github.com/elasticsearch/elasticsearch/blob/v1.4.2/src/main/java/org/elasticsearch/index/mapper/internal/TimestampFieldMapper.java#L286
Looks like the existence of the default timestamp is not checked before use. The next line has the same issue -- it uses the default timestamp without checking whether it is null.
To reproduce:
```
$ curl -XPUT localhost:9200/twitter2
$ curl -XPUT localhost:9200/twitter2/tweet/_mapping -d '{
  "tweet" : {
    "_timestamp" : {
      "enabled" : true,
      "default" : null
    }
  }
}'
$ curl -XPUT localhost:9200/twitter2/tweet/_mapping -d '{
  "tweet" : {
    "_timestamp" : {
      "enabled" : true,
      "default" : null
    },
    "properties": {
      "user": {"type": "string"}
    }
  }
}'
```
Closes#9204.
(cherry picked from commit 62c6d63)
Before Elasticsearch 1.0, the type was allowed to be passed as the root
element when uploading a document. However, this was ambiguous if the
mappings also contained a field with the same name as the type. The
behavior was changed in 1.0 to not allow this, but a setting was added
for backwards compatibility. This change removes the setting for 2.0.
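To illustrate the ambiguity (index, type and field names are made up): with the legacy behavior enabled, a request like the following could either mean a document wrapped in its type or a document with a top-level field that happens to be called `tweet`:
```
curl -XPUT 'localhost:9200/myindex/tweet/1' -d '{
  "tweet" : {
    "message" : "trying out Elasticsearch"
  }
}'
```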
The header indicates how many shard copies (primary and replica shards) the write was supposed to go to, on how many
shard copies the write succeeded, and it captures shard failures if writing into a replica shard fails.
For async writes it also includes the number of shard copies on which the write is still pending.
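As a rough sketch (values and surrounding fields are made up, and the exact shape may differ), the header in an index response could look like this:
```
{
  "_index" : "test",
  "_type" : "doc",
  "_id" : "1",
  "_shards" : {
    "total" : 3,
    "successful" : 2,
    "pending" : 1,
    "failed" : 0
  }
}
```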
Closes#7994
This commit removes most of the Engine abstractions and removes
Engine exposure via dependency injection. It also removes the Holder
abstraction and makes the engine itself start at construction time.
It removes the start method from the engine entirely, which means no engine
instance exists unless it has been started. There is also no way to stop the
engine in order to restart it; that requires an entirely new Engine.
This fix ensures that calls to the GET alias/mappings/settings/warmers APIs return the aliases/mappings/settings/warmers object even if there is no content within them. This makes them consistent with the GET Index API docs and the breaking changes in 1.4 docs.
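For example, fetching warmers for an index that has none should now return an empty `warmers` object rather than an empty body (index name is made up; response shown as a sketch):
```
curl -XGET 'localhost:9200/myindex/_warmer?pretty'
{
  "myindex" : {
    "warmers" : { }
  }
}
```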
Closes#9148
This currently only adds checks to BaseFuture, but this should already cover
lots of client code. We could add more in the future, like interactions with
the filesystem and so on.
Close#9164
The multi percolate shard responses are collected in an atomic array in which the shard id is used as the index, but the number of shards the multi percolate request was meant to go to was used as the size of this array instead of the total number of shards the index has. This caused an exception when routing was used.
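For reference, this is the kind of multi percolate request with routing that hit the bug (index, type and routing value are made up; placing routing in the header line is an assumption about where it is typically set):
```
curl -XGET 'localhost:9200/_mpercolate' --data-binary '{"percolate" : {"index" : "twitter", "type" : "tweet", "routing" : "user1"}}
{"doc" : {"message" : "some text"}}
'
```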
Closes#6214
Once we delete the index on a node we are closing all resources
and subsequently need to delete all shard contents from disk. Yet
this happens today under a lock (the shard lock) that needs to be
acquired in order to execute any operation on the shard's data
path. We try to delete all the index meta-data once we have acquired
all the shard locks, but this operation can run into a timeout, which causes
the index to remain on disk. Further, all shard data will be left on
disk if the timeout is reached.
This commit removes all the shard data just before the shard lock
is released, as the last operation on a shard that belongs to a deleted
index.
This commit adds a verbose flag to the _segments api. Currently the
only additional information returned when set to true is the full
ram tree from lucene, as an additional element per segment.
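For example (index name is made up; omit it to get segments for all indices):
```
curl -XGET 'localhost:9200/test/_segments?verbose=true&pretty'
```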
This is our only cache which is not 'exact' and might allow for stale results.
Additionally, a similar cache that we have which needs to perform lookups in other
indices in order to run queries is the script index, and for that index we rely
on the filesystem cache, so we should probably do the same with terms filter
lookups.
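For context, this is the kind of terms lookup the removed cache was fronting; the filter fetches its value list from a document in another index at query time (index, type and field names are made up):
```
curl -XGET 'localhost:9200/tweets/_search' -d '{
  "query" : {
    "filtered" : {
      "filter" : {
        "terms" : {
          "user" : {
            "index" : "users",
            "type" : "user",
            "id" : "2",
            "path" : "followers"
          }
        }
      }
    }
  }
}'
```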
Close#9056
The `cluster.routing.allocation.balance.primary` setting has caused
a lot of confusion in the past while it has very little benefit from a
shard allocation point of view. Users tend to modify this value to
evenly distribute primaries across the nodes, which is dangerous since
the primary flag on its own can trigger relocations. The primary flag for a shard
should not have any impact on cluster performance unless the high level feature
suffering from primary hotspots is buggy. Yet, this setting was intended to be a
tie-breaker, which is not necessary anymore since the algorithm is deterministic.
This commit removes this setting entirely.
Today this is not populated (null) in these cases. But it would be useful to have
this available, even just for improved error messages.
The trickiest part today is the handling of 1.2.x files written with
lucene 4.8+ which have both ES checksums and lucene ones. This is now simplified:
when the lucene checksum is there, we always use it.
Closes#9152
In some situations the shard balancing weight delta becomes negative. Yet,
a negative delta is always treated as `well balanced`, which is wrong. I wasn't
able to reproduce the issue in any way other than using the real world data
from issue #9023. This commit adds a fix that uses absolute deltas as well as a base
test class that allows building tests or simulations from the cat API output.
Closes#9023
If someone blocks on it and it is interrupted, we throw an ElasticsearchIllegalStateException. We should not call `Thread.currentThread().interrupt()` in this case because we already communicate the interrupt through an exception.
Similar to #9001
Closes#9141
A couple of changes that triggered a refactoring in Elasticsearch:
- LUCENE-6148: Accountable.getChildResources returns a collection instead of
a list.
- LUCENE-6121: CachingTokenFilter now propagates reset(), as a result
SimpleQueryParser.newPossiblyAnalyzedQuery has been fixed to not reset both
the underlying stream and the wrapper (otherwise lucene would barf because of
a double reset).
- LUCENE-6119: The auto-throttle issue changed a couple of method
names/parameters. It also made
`UpdateSettingsTests.testUpdateMergeMaxThreadCount` dead slow so I muted this
test until we clean up merge throttling to use LUCENE-6119.
Close#9145
If you try to merge two settings where a key is a plain field in one and an array in the other, the values were merged instead of the second one overriding the first.
First config:
    my.value: 1
Second config:
    my.value: [ 2, 3 ]
If you execute
    settingsBuilder().put(settings1).put(settings2).build()
now only the values 2 and 3 will be in the final settings.
Closes#8381
Plugin installation failed when the bin/, conf/ and plugins/ directories were on different file systems. The method Files.move() can't be used to move a non-empty directory between different file systems.
I didn't find a simple way to unit test that, even with in-memory filesystems like jimfs or the Lucene test framework.
Closes#8999
If a bulk index request fails due to a disconnect, an unavailable shard, etc., the request is
retried once before actually failing. However, even in case of failure the documents
might already have been indexed. For auto-generated ids the request must not add the
documents again and therefore canHaveDuplicates must be set to true.
closes#8788