These abstractions don't really do anything, nor can they be extended. We can just fold them into IndexModule for now. There are more of them, but those
are tricky to remove due to some test dependencies which I need to resolve first.
Types are still optional, but if you do provide them, they can't be null. The existing constructor that accepted null has been split into two: one that accepts no arguments, and one that accepts the types argument, which must be non-null.
Also trimmed down the different ways of setting ids; some were misleading as they would always add the given ids to the existing ones rather than set them, and the `add` prefix makes that clear. Left an `addIds` method that accepts a varargs argument, and added a check that the ids are not null.
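A minimal sketch of the resulting API shape, with a hypothetical class name and fields (assumptions, not the actual class):
```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Objects;

// Hypothetical request class illustrating the constructor split and the
// add-prefixed ids setter; the names here are assumptions.
public class ExampleRequest {
    private final String[] types;
    private final List<String> ids = new ArrayList<>();

    // No-arg constructor: types stay unset (they remain optional).
    public ExampleRequest() {
        this.types = new String[0];
    }

    // If types are provided, they must not be null.
    public ExampleRequest(String... types) {
        this.types = Objects.requireNonNull(types, "types must not be null");
    }

    // The "add" prefix makes clear that ids are appended, not replaced.
    public ExampleRequest addIds(String... ids) {
        Objects.requireNonNull(ids, "ids must not be null");
        Collections.addAll(this.ids, ids);
        return this;
    }
}
```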
Today we use a hierarchical injector at the shard level for each shard
created. This commit removes the shard-level injector and replaces
it with good old constructor calls. It also removes all shard-level plugin
facilities, so plugins can only have node- or index-level modules.
Plugins that need to track shard lifecycles should use the relevant
callbacks from the lifecycle we already provide.
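For illustration, a hedged sketch of tracking shards through lifecycle callbacks; the listener interface below is hypothetical, since the real one in Elasticsearch differs in names and signatures:
```java
// Hypothetical listener shape; Elasticsearch's actual lifecycle listener
// has different method names and parameter types.
interface ShardLifecycleListener {
    void afterShardCreated(String shardId);
    void beforeShardClosed(String shardId);
}

// A plugin tracks shards via callbacks instead of a shard-level module.
class ShardTracker implements ShardLifecycleListener {
    @Override
    public void afterShardCreated(String shardId) {
        System.out.println("start tracking shard " + shardId);
    }

    @Override
    public void beforeShardClosed(String shardId) {
        System.out.println("release per-shard resources for " + shardId);
    }
}
```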
This commit removes and now forbids all uses of
com.google.common.hash.HashCode, com.google.common.hash.HashFunction,
and com.google.common.hash.Hashing across the codebase. This is one of
the few remaining steps in the eventual removal of Guava as a
dependency.
Relates #13224
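As a hedged illustration of the kind of replacement this implies (the actual substitutions in the codebase may differ), plain JDK MessageDigest covers typical Hashing call sites:
```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public final class HashingExample {
    // Roughly equivalent to Guava's Hashing.sha256().hashString(s, UTF_8).asBytes(),
    // using only the JDK.
    static byte[] sha256(String s) throws NoSuchAlgorithmException {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        return digest.digest(s.getBytes(StandardCharsets.UTF_8));
    }
}
```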
The fix in #13848 has an off-by-one issue where the first byte of the checksum
was never written. Unfortunately, most tests masked the problem: the first
byte of the checksum is very likely a 0, which causes only very rare
failures.
Relates to #13896
Relates to #13848
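To make the bug class concrete, a hypothetical sketch (not the actual code): starting the copy at index 1 silently drops the first checksum byte, and because that byte is usually 0 the corruption almost never surfaced:
```java
import java.io.ByteArrayOutputStream;

public final class OffByOneExample {
    static byte[] writeChecksum(byte[] checksum) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // BUG: out.write(checksum, 1, checksum.length - 1) skips checksum[0].
        // FIX: write every byte, starting at offset 0.
        out.write(checksum, 0, checksum.length);
        return out.toByteArray();
    }
}
```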
Right now, we allow allocation if there is only a single node in the
cluster. It would be nice to fail open when there is only one data node
(instead of only one node total).
Closes#9391
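A hedged sketch of the changed condition, assuming DiscoveryNodes-style accessors (the exact decider code may differ):
```java
import org.elasticsearch.cluster.node.DiscoveryNodes;

public final class SingleNodeCheck {
    // Old behavior: fail open when the whole cluster has a single node.
    // New behavior: fail open when there is only one *data* node, so
    // dedicated master/client nodes no longer mask the single-data-node case.
    static boolean failOpen(DiscoveryNodes nodes) {
        return nodes.getDataNodes().size() <= 1;
    }
}
```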
Lucene's RateLimiter can do too much sleeping on small values (see also #6018).
The issue here is that calls to `pause` are not properly guarded in `restoreFile`.
Instead of simply adding the guard, this commit uses the RateLimitingInputStream, just as `snapshotFile` already does.
Closes#13828
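For reference, a hedged sketch of the guard pattern RateLimitingInputStream applies, based on Lucene's RateLimiter API:
```java
import java.io.IOException;
import org.apache.lucene.store.RateLimiter;

public final class PauseGuard {
    private long bytesSinceLastPause;

    // Only pause once enough bytes have accumulated; calling pause() for
    // tiny increments makes the rate limiter sleep far too much (see #6018).
    void maybePause(RateLimiter rateLimiter, int bytes) throws IOException {
        bytesSinceLastPause += bytes;
        if (bytesSinceLastPause > rateLimiter.getMinPauseCheckBytes()) {
            rateLimiter.pause(bytesSinceLastPause);
            bytesSinceLastPause = 0;
        }
    }
}
```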
Today we rely on calling Store.verify on the closed stream to validate the
checksum. This is still necessary to catch file truncation, but for an actually corrupted
file or checksum we can fail early and check the checksum against the actual metadata
once it has been fully written to the VerifyingIndexOutput.
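A hedged sketch of the early check, assuming the expected checksum from the store metadata is encoded the way Lucene checksums usually are (names here are illustrative):
```java
import java.io.IOException;
import org.apache.lucene.store.IndexOutput;

public final class EarlyVerify {
    // Illustrative only: once all bytes have been written, compare the actual
    // checksum against the expected one from the metadata instead of waiting
    // for Store.verify on the closed stream. Truncation still needs the
    // existing verification, since a short file never reaches this point.
    static void verifyNow(IndexOutput output, String expectedChecksum) throws IOException {
        String actual = Long.toString(output.getChecksum(), Character.MAX_RADIX);
        if (expectedChecksum.equals(actual) == false) {
            throw new IOException("checksum mismatch: expected [" + expectedChecksum
                + "] but got [" + actual + "]");
        }
    }
}
```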
This commit removes and now forbids all uses of
com.google.common.collect.ImmutableCollection across the codebase. This
is one of the final steps in the eventual removal of Guava as a
dependency.
Relates #13224
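A minimal illustration of the typical substitution, using the JDK's unmodifiable wrappers (actual call sites may vary):
```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public final class ImmutableExample {
    // Roughly replaces Guava's ImmutableList.copyOf(input): a defensive copy
    // wrapped so that callers cannot mutate it.
    static List<String> immutableCopy(List<String> input) {
        return Collections.unmodifiableList(new ArrayList<>(input));
    }
}
```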
This commit removes and now forbids all uses of
com.google.common.io.Resources across the codebase. This is one of the
few remaining steps in the eventual removal of Guava as a dependency.
Relates #13224
Closes#13880
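A hedged example of the typical substitution: loading a classpath resource with plain JDK APIs instead of com.google.common.io.Resources (actual call sites may vary):
```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public final class ResourceExample {
    // Roughly replaces Resources.toString(Resources.getResource(path), UTF_8).
    static String readResource(String path) throws IOException {
        try (InputStream in = ResourceExample.class.getResourceAsStream(path)) {
            if (in == null) {
                throw new IOException("resource not found: " + path);
            }
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buffer = new byte[4096];
            int n;
            while ((n = in.read(buffer)) != -1) {
                out.write(buffer, 0, n);
            }
            return new String(out.toByteArray(), StandardCharsets.UTF_8);
        }
    }
}
```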
Squashed commit of the following:
commit 316a328e5032e580ba840db993d907631334aac0
Author: Robert Muir <rmuir@apache.org>
Date: Wed Sep 30 16:57:47 2015 -0400
windows is terrible
commit 0406b560c58bf833f8d77af9c7cf3386771dd9c5
Author: Robert Muir <rmuir@apache.org>
Date: Wed Sep 30 16:43:09 2015 -0400
Nuke ES_CLASSPATH appending
Out of the box, ES expects its files to be in particular places. We should not be appending to ES_CLASSPATH and letting users specify extra entries there, like we do in elasticsearch.bin.sh.
If the user sets it, it's not going to work out of the box.
Closes#13812
commit 415d8972df28eddec322bb6d70100a1993fa95f6
Author: Robert Muir <rmuir@apache.org>
Date: Wed Sep 30 16:26:35 2015 -0400
Fail hard on empty classpath elements.
This can happen easily if somehow old 1.x shell scripts survive and try to launch 2.x code.
I have the feeling this happens because of packaging upgrades or something similar.
Either way: we can just fail hard and clear in this situation, rather than the current behavior,
where CWD might be / and we might traverse the entire filesystem until we hit an error...
Relates to #13864
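A hedged sketch of such a fail-hard check for the empty-classpath-element case above (the actual bootstrap code may differ):
```java
import java.io.File;

public final class ClasspathCheck {
    // An empty classpath element means "current working directory"; if CWD is /,
    // class loading could end up scanning the whole filesystem. Fail hard and
    // clear instead.
    static void checkClasspath() {
        String classpath = System.getProperty("java.class.path");
        for (String element : classpath.split(File.pathSeparator, -1)) {
            if (element.isEmpty()) {
                throw new RuntimeException("classpath contains an empty element: [" + classpath + "]");
            }
        }
    }
}
```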
When the Elasticsearch version recorded in `plugin-descriptor.properties` (the version the plugin was built against) does not exactly match
the current Elasticsearch version, the plugin manager fails with a message like:
```
ERROR: Elasticsearch version [2.0.0-beta1] is too old for plugin [elasticsearch-mapper-attachments]
```
Actually, the message should be:
```
Plugin [elasticsearch-mapper-attachments] is incompatible with Elasticsearch [2.0.0.beta2]. Was designed for version [2.0.0.beta1].
```
The same holds in the opposite direction: if you try to install a plugin that was built against a newer version of Elasticsearch, it fails the same way:
```
Plugin [elasticsearch-mapper-attachments] is incompatible with Elasticsearch [2.0.0.beta1]. Was designed for version [2.0.0.beta2].
```
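A hedged sketch of the improved message construction (names and exception type are assumptions):
```java
public final class PluginVersionCheck {
    // Illustrative check in the plugin manager: report the mismatch from the
    // plugin's point of view rather than calling Elasticsearch "too old".
    static void checkVersion(String pluginName, String currentVersion, String builtForVersion) {
        if (currentVersion.equals(builtForVersion) == false) {
            throw new IllegalArgumentException("Plugin [" + pluginName
                + "] is incompatible with Elasticsearch [" + currentVersion
                + "]. Was designed for version [" + builtForVersion + "].");
        }
    }
}
```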
We try to get the index shard instance again from the index service on a different
thread while that shard might have already been closed or removed, which can cause an NPE
instead of the expected exception.
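A hedged sketch of the fix pattern, assuming 2.x-era accessors where shard(id) may return null while shardSafe(id) throws the expected exception (names may differ by version):
```java
import org.elasticsearch.index.IndexService;
import org.elasticsearch.index.shard.IndexShard;

public final class ShardLookup {
    // Re-resolving the shard on another thread can race with shard close or
    // removal; prefer the accessor that throws the expected exception over the
    // one that returns null and surfaces later as an NPE.
    static IndexShard resolve(IndexService indexService, int shardId) {
        return indexService.shardSafe(shardId); // throws if the shard is gone
    }
}
```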