Right now, we allow allocation if there is only a single node in the
cluster. It would be nice to fail open when there is only one data node
(instead of only one node total).
Closes#9391
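A minimal, self-contained sketch of the intended check; the types and accessors here are illustrative stand-ins, not the actual decider code:
```java
import java.util.List;

class AllocationCheck {
    /** Illustrative stand-in for a cluster node. */
    static class Node {
        final boolean isDataNode;
        Node(boolean isDataNode) { this.isDataNode = isDataNode; }
    }

    /** Fail open (always allow allocation) only when <= 1 *data* node exists. */
    static boolean failOpen(List<Node> nodes) {
        int dataNodes = 0;
        for (Node n : nodes) {
            if (n.isDataNode) dataNodes++;
        }
        return dataNodes <= 1; // previously: nodes.size() <= 1 (any node type counted)
    }
}
```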
Lucene's RateLimiter can sleep far longer than intended when passed small byte counts (see also #6018).
The issue here is that calls to `pause` are not properly guarded in `restoreFile`.
Instead of simply adding the guard, this commit uses a RateLimitingInputStream, just as `snapshotFile` already does.
Closes#13828
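A self-contained sketch of the guarded behavior, assuming only Lucene's RateLimiter API; ES's actual RateLimitingInputStream differs in its details:
```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import org.apache.lucene.store.RateLimiter;

class GuardedRateLimitingInputStream extends FilterInputStream {
    private final RateLimiter rateLimiter;
    private long bytesSinceLastPause;

    GuardedRateLimitingInputStream(InputStream in, RateLimiter rateLimiter) {
        super(in);
        this.rateLimiter = rateLimiter;
    }

    private void maybePause(int bytes) throws IOException {
        bytesSinceLastPause += bytes;
        // Guard: only pause once enough bytes accumulated, so many tiny
        // reads do not each trigger a sleep on a small value.
        if (bytesSinceLastPause > rateLimiter.getMinPauseCheckBytes()) {
            rateLimiter.pause(bytesSinceLastPause);
            bytesSinceLastPause = 0;
        }
    }

    @Override
    public int read() throws IOException {
        int b = in.read();
        if (b != -1) {
            maybePause(1);
        }
        return b;
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        int n = in.read(b, off, len);
        if (n > 0) {
            maybePause(n);
        }
        return n;
    }
}
```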
Today we rely on calling Store.verify on the closed stream to validate the
checksum. This is still necessary to catch file truncation, but for an actually
corrupted file or checksum we can fail early by checking the checksum against
the actual metadata once it has been fully written to the VerifyingIndexOutput.
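A hedged, plain-JDK sketch of the idea: once the expected number of bytes has been written, compare the running checksum against the expected value instead of waiting for the close-time Store.verify (which remains necessary to catch truncation):
```java
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.CRC32;

/** Illustrative wrapper: fail as soon as the last byte arrives, not on close. */
class EagerVerifyingOutputStream extends OutputStream {
    private final OutputStream out;
    private final CRC32 crc = new CRC32();
    private final long expectedLength;
    private final long expectedChecksum; // expected CRC32 value from the file's metadata
    private long written;

    EagerVerifyingOutputStream(OutputStream out, long expectedLength, long expectedChecksum) {
        this.out = out;
        this.expectedLength = expectedLength;
        this.expectedChecksum = expectedChecksum;
    }

    @Override
    public void write(int b) throws IOException {
        out.write(b);
        crc.update(b);
        written++;
        // Truncation (written < expectedLength at close) still needs a close-time check.
        if (written == expectedLength && crc.getValue() != expectedChecksum) {
            throw new IOException("checksum mismatch: expected ["
                    + expectedChecksum + "] but was [" + crc.getValue() + "]");
        }
    }
}
```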
This commit removes and now forbids all uses of
com.google.common.collect.ImmutableCollection across the codebase. This
is one of the final steps in the eventual removal of Guava as a
dependency.
Relates #13224
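A typical replacement, sketched with plain JDK collections; the actual substitutions vary across the codebase:
```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

class Example {
    // before (Guava): ImmutableList<String> NAMES = ImmutableList.of("a", "b");
    static final List<String> NAMES =
            Collections.unmodifiableList(Arrays.asList("a", "b"));
}
```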
This commit removes and now forbids all uses of
com.google.common.io.Resources across the codebase. This is one of the
few remaining steps in the eventual removal of Guava as a dependency.
Relates #13224
Closes#13880
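One plain-JDK way to replace Resources.toString, sketched; the replacements actually used in the codebase may differ:
```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.nio.charset.StandardCharsets;

class ReadResource {
    // before (Guava): Resources.toString(url, StandardCharsets.UTF_8)
    static String toString(URL url) throws IOException {
        try (InputStream in = url.openStream()) {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            byte[] chunk = new byte[8192];
            int n;
            while ((n = in.read(chunk)) != -1) {
                buf.write(chunk, 0, n);
            }
            return new String(buf.toByteArray(), StandardCharsets.UTF_8);
        }
    }
}
```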
Squashed commit of the following:
commit 316a328e5032e580ba840db993d907631334aac0
Author: Robert Muir <rmuir@apache.org>
Date: Wed Sep 30 16:57:47 2015 -0400
windows is terrible
commit 0406b560c58bf833f8d77af9c7cf3386771dd9c5
Author: Robert Muir <rmuir@apache.org>
Date: Wed Sep 30 16:43:09 2015 -0400
Nuke ES_CLASSPATH appending
Out of the box, ES expects its files to be in particular places. We should not be appending to ES_CLASSPATH, allowing users to specify additional entries there, as we do in elasticsearch.bin.sh.
If the user sets it, it's not going to work out of the box.
Closes#13812
commit 415d8972df28eddec322bb6d70100a1993fa95f6
Author: Robert Muir <rmuir@apache.org>
Date: Wed Sep 30 16:26:35 2015 -0400
Fail hard on empty classpath elements.
This can easily happen if somehow old 1.x shell scripts survive and try to launch 2.x code;
presumably this occurs because of packaging upgrades or the like.
Either way, we can fail hard and clear in this situation, rather than the current situation
where CWD might be /, and we might traverse the entire filesystem until we hit an error...
Relates to #13864
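A hedged sketch of the fail-hard check (hypothetical helper; the real bootstrap code differs), refusing empty classpath elements such as a stray `:` left behind by old scripts:
```java
import java.io.File;
import java.util.regex.Pattern;

class ClasspathCheck {
    static void checkNoEmptyElements() {
        String classPath = System.getProperty("java.class.path");
        // limit -1 keeps trailing empty strings, so "a.jar:" is caught too
        for (String element : classPath.split(Pattern.quote(File.pathSeparator), -1)) {
            if (element.isEmpty()) {
                throw new IllegalStateException("empty classpath element in ["
                        + classPath + "]; an empty element resolves to the CWD"
                        + " and can trigger a scan of the entire filesystem");
            }
        }
    }
}
```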
When the plugin manager finds that the elasticsearch version recorded in `plugin-descriptor.properties` (the version the plugin was built against)
is not exactly the current elasticsearch version, it fails with a message like:
```
ERROR: Elasticsearch version [2.0.0-beta1] is too old for plugin [elasticsearch-mapper-attachments]
```
Actually, the message should be:
```
Plugin [elasticsearch-mapper-attachments] is incompatible with Elasticsearch [2.0.0.beta2]. Was designed for version [2.0.0.beta1].
```
The same holds in the opposite direction: if you try to install a plugin that was built with a newer version of elasticsearch, it will fail the same way:
```
Plugin [elasticsearch-mapper-attachments] is incompatible with Elasticsearch [2.0.0.beta1]. Was designed for version [2.0.0.beta2].
```
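A hedged sketch (hypothetical names) of the check that produces the clearer message:
```java
class PluginVersionCheck {
    static void check(String pluginName, String builtFor, String current) {
        if (builtFor.equals(current) == false) {
            throw new IllegalArgumentException("Plugin [" + pluginName
                    + "] is incompatible with Elasticsearch [" + current
                    + "]. Was designed for version [" + builtFor + "].");
        }
    }
}
```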
We try to get the index shard instance again from the index service on a different
thread while that shard might already have been closed or removed, which can cause an NPE
instead of the expected exception.
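A hedged sketch with hypothetical types and accessors: guard the second lookup so a closed or removed shard surfaces an expected exception rather than an NPE:
```java
class ShardLookup {
    interface IndexService { IndexShard shardOrNull(int shardId); }
    interface IndexShard {}

    static IndexShard resolve(IndexService indexService, int shardId) {
        IndexShard shard = indexService.shardOrNull(shardId);
        if (shard == null) {
            // the shard was closed or removed between lookups on different threads
            throw new IllegalStateException("shard [" + shardId + "] no longer available");
        }
        return shard;
    }
}
```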
This commit shuffles and rewrites some code in RecoverySourceHandler to make it
simpler and more unit-testable. It doesn't change all parts of this class,
nor is the class fully tested yet. It's an important part of the infrastructure, so I started
to make it better tested, but I don't want to change everything in one go; keeping the changes small
makes review simpler and more detailed. Future commits will continue cleaning up the class and
adding more tests.
If a document is indexed into ES with no id, ES will generate one for it. We used to have an optimization for this case where the engine would not try to resolve the ids of these requests against the existing index but would immediately index them. This optimization has proven to be a source of brittle bugs (now solved!) and we disabled it in 1.5, preparing to remove it if no performance degradation was found. Since we haven't seen any such degradation, we can remove it.
Along with the removal of the optimization, we can remove the autogenerated-id flag on indexing requests and the canHaveDuplicate flag. The only downside of removing the canHaveDuplicate flag is that we can no longer make sure that, when we retry a create operation with an autogenerated id, we ignore any document-already-exists exception (see #9125 for background and discussion). To work around this, we no longer set the operation to CREATE when we generate an id, so the resulting request will never fail when it finds an existing doc but will return a version of 2. I think that's acceptable.
Closes#13857
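A hedged, self-contained model of the retry behavior described above; ES's real id generator and request types differ:
```java
import java.util.UUID;

class AutoIdRequest {
    String id;
    String opType = "index"; // note: no longer forced to "create"

    void ensureId() {
        if (id == null) {
            id = UUID.randomUUID().toString(); // stand-in for ES's id generator
            // opType stays "index": a retry that finds the first attempt's
            // document will not fail with a document-already-exists error;
            // it simply indexes again and the version becomes 2.
        }
    }
}
```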