We have a compression framework that we use internally, mainly to compress some
xcontent bytes. However, it is quite lenient: for instance, it relies on the
assumption that compression-format detection is only ever called on either
compressed xcontent bytes or raw xcontent bytes, but nothing enforces this.
In fact, we are misusing it in BinaryFieldMapper: if someone indexes a binary
field that happens to start with the same header as an LZF stream, then at read
time we will try to decompress it.
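A minimal sketch of why that header check is unsafe for arbitrary binary fields
(the helper below is hypothetical, but an LZF chunk really does start with the
ASCII bytes 'Z' and 'V'):

class LzfSniffer {
    // Header-based detection amounts to no more than this two-byte check.
    static boolean looksLikeLzf(byte[] bytes) {
        return bytes.length >= 2 && bytes[0] == 'Z' && bytes[1] == 'V';
    }
}

Any user-supplied binary value that happens to begin with those two bytes passes
the check, and a reader that trusts it will then attempt to decompress raw data.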
It also simplifies the API by removing block compression (keeping only
streaming) and by eliminating the code duplication caused by some methods
accepting a byte[] and others a BytesReference.
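A hypothetical before/after sketch of that consolidation (the interface and
method names are illustrative, not the actual API):

import java.io.IOException;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.io.stream.StreamOutput;

// Before: parallel entry points for byte[] and BytesReference, plus block compression.
interface OldCompressor {
    byte[] compress(byte[] data, int offset, int length);            // block, byte[]
    BytesReference compress(BytesReference data) throws IOException; // block, BytesReference
    StreamOutput streamOutput(StreamOutput out) throws IOException;  // streaming
}

// After: a single streaming entry point; callers wrap whatever bytes they have.
interface NewCompressor {
    StreamOutput streamOutput(StreamOutput out) throws IOException;
}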
This commit removes unused throws statements where RuntimeExceptions are
mentioned in the throws clause. It also removes obsolete import statements
for java.lang.IllegalArgumentException and java.lang.IllegalStateException.
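An illustrative example of both cleanups (not taken from the actual diff):

import java.lang.IllegalArgumentException; // obsolete: java.lang is imported implicitly

class Example {
    // Before: void setValue(int v) throws IllegalArgumentException
    // After: unchecked exceptions never need to be declared.
    void setValue(int v) {
        if (v < 0) {
            throw new IllegalArgumentException("value must be non-negative");
        }
    }
}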
Today there is a chance that the state version for shard, index or cluster
state goes backwards or is reset on a full restart, depending on several
factors unrelated to the state itself. To prevent collisions with already
existing state files and to maintain write-once properties, this change
introduces an incremental state ID instead of using the plain state version.
This also fixes a bug where the previous legacy state had a greater version
than the current state, which caused an exception on node startup or when
left-over files were present.
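A minimal sketch of the write-once idea (the file naming and helper are
illustrative, not the actual state-format code): the ID of a new state file is
chosen strictly greater than any ID already on disk, regardless of what the
state version does.

import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class StateIds {
    private static final Pattern STATE_FILE = Pattern.compile("state-(\\d+)\\.st");

    // Next ID is max(existing IDs) + 1, or 0 for an empty directory, so a new
    // file can never collide with an existing or left-over one.
    static long nextStateId(Path stateDir) throws IOException {
        long max = -1;
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(stateDir)) {
            for (Path file : stream) {
                Matcher m = STATE_FILE.matcher(file.getFileName().toString());
                if (m.matches()) {
                    max = Math.max(max, Long.parseLong(m.group(1)));
                }
            }
        }
        return max + 1;
    }
}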
Closes #10316
These help a lot when refactoring, upgrading Lucene, etc., and can prevent
code duplication (as you get a compile error for outdated stuff).
Closes #9832.
Files.exists(f) && Files.isDirectory(f) -> Files.isDirectory(f)
if (Files.exists(f)) Files.delete(f) -> Files.deleteIfExists(f)
if (!Files.exists(f)) Files.createDirectories(f) -> Files.createDirectories(f)
In a few places where successive I/O ops are done against the same file, convert
to Files.readAttributes().
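An illustrative example of that conversion (not from the actual diff):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.BasicFileAttributes;

class FileStats {
    static void print(Path f) throws IOException {
        // Before: three separate filesystem calls against the same file.
        // boolean dir = Files.isDirectory(f);
        // long size = Files.size(f);
        // long mtime = Files.getLastModifiedTime(f).toMillis();

        // After: one call that fetches all attributes at once.
        BasicFileAttributes attrs = Files.readAttributes(f, BasicFileAttributes.class);
        System.out.printf("dir=%b size=%d mtime=%d%n",
                attrs.isDirectory(), attrs.size(), attrs.lastModifiedTime().toMillis());
    }
}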
Closes #9807.
Cleans up the testReusePeerRecovery test as well.
The actual fix is in TransportNodesListShardStoreMetaData.java, which
needs to use `nodeEnv.shardDataPaths` instead of `nodeEnv.shardPaths`.
Due to the difficulty in tracking this down, I've added a lot of
additional logging. This also fixes a logging issue in GatewayAllocator.
This allows specifying the path an index's data will be stored at.
`index.data_path` is specified in the settings when creating an index
and cannot be changed dynamically.
An example request would look like:
POST /myindex
{
  "settings": {
    "number_of_shards": 2,
    "data_path": "/tmp/myindex"
  }
}
This would put data in /tmp/myindex/0/index/0 and /tmp/myindex/0/index/1.
Since this can be used to write data to arbitrary locations on disk, it
requires enabling the `node.enable_custom_paths` setting in
elasticsearch.yml on all nodes.
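For reference, enabling it would look like this in elasticsearch.yml (assuming
`true` is the enabling value):

node.enable_custom_paths: true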
Relates to #8976
We have only had a single gateway since ES 1.3. There is no need to keep all
these abstractions and nested packages; we can fold most of them into simpler
structures.