This adds a new proxy for RestHandlers and RestControllers so that requests made
to deprecated REST APIs can be automatically logged in the ES logs via the
DeprecationLogger as well as via a "Warning" header (RFC 7234) on all responses.
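A minimal sketch of the idea, assuming a hypothetical wrapper class (`DeprecatedHandlerProxy` and its `handle` method are illustrative, not the actual names in this change):
```
import org.elasticsearch.common.logging.DeprecationLogger;
import org.elasticsearch.common.logging.Loggers;

// Hypothetical proxy around a deprecated handler: the DeprecationLogger call
// writes a deprecation message to the ES log, and in this change's design the
// same message is also attached to the response as a "Warning" header.
public class DeprecatedHandlerProxy {

    private static final DeprecationLogger deprecationLogger =
            new DeprecationLogger(Loggers.getLogger(DeprecatedHandlerProxy.class));

    void handle(String path) {
        deprecationLogger.deprecated("[{}] is a deprecated endpoint", path);
        // ... delegate to the wrapped RestHandler ...
    }
}
```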
Calling indicesService.deleteIndex() can trip an assertion if a cluster state update
is concurrently being applied in IndicesClusterStateService. The index can then be
deleted after the failMissingShards check and before we try creating new and updated
shards, tripping an assertion that non-existent shards must have shard state
INITIALIZING (it is STARTED in this case).
This change adds the type of each field to the field stats response.
It can be one of the following:
* "integer" for byte, short, integer and long
* "float" for float, half-float and double
* "date" for date
* "ip" for ip
* "text" for string, keyword and text.
Closes #17750
This adds a remote option to reindex that looks like
```
curl -XPOST 'localhost:9200/_reindex?pretty' -d'{
  "source": {
    "remote": {
      "host": "http://otherhost:9200"
    },
    "index": "target",
    "query": {
      "match": {
        "foo": "bar"
      }
    }
  },
  "dest": {
    "index": "target"
  }
}'
```
This reindex has all of the features of local reindex:
* Using queries to filter what is copied
* Retry on rejection
* Throttle/rethrottle
The big advantage of this version is that it goes over the HTTP API
which can be made backwards compatible.
Some things are different:
The query field is sent directly to the other node rather than parsed
on the coordinating node. This should allow it to support constructs
that are invalid on the coordinating node but are valid on the target
node. Mostly, that means old syntax.
This commit renames writeThrowable to writeException. The situation here
stems from the fact that the StreamOutput method for serializing
Exceptions needs to accept Throwables too as Throwables can be the cause
of serialized Exceptions. Yet, we do not serialize Throwables in the
Error sub-hierarchy in a way that they can be deserialized into their
initial type. This leads to an asymmetry in the StreamOutput method for
serializing Exceptions and the StreamInput method for reading
Exceptions. Namely, the former will accept Throwables but the latter
will only return Exceptions. A goal with the stream methods has always
been symmetry in the method names so that serialization/deserialization
routines appear symmetrical in code. It is this asymmetry on the
input/output types for Exceptions on StreamOutput/StreamInput that
clashes with the desired symmetry of naming. Despite this, we should
favor symmetry in the naming of the methods. This commit renames
StreamOutput#writeThrowable to StreamOutput#writeException which leaves
us with Exception StreamInput#readException and void
StreamOutput#writeException(Throwable).
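Sketched signatures (bodies elided) to make the asymmetry concrete; this is a simplified rendering, not the full classes:
```
import java.io.IOException;

// The writer accepts any Throwable because causes of serialized Exceptions
// may be Throwables (including Errors), even though Errors don't round-trip
// into their original type.
abstract class StreamOutput {
    abstract void writeException(Throwable throwable) throws IOException;
}

// The reader only ever reconstructs Exceptions, never raw Throwables.
abstract class StreamInput {
    abstract Exception readException() throws IOException;
}
```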
This commit modifies the initial value of the transport client
round-robin index to a random value so that initial requests are less
likely to all hit the same node.
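A minimal sketch of the idea (not the transport client's actual code): seed the counter randomly and index into the node list modulo its size.
```
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicInteger;

// Round-robin with a random starting point, so a fresh client is unlikely to
// send its first request to nodes.get(0) every time.
class RandomStartRoundRobin<T> {
    private final AtomicInteger index =
            new AtomicInteger(ThreadLocalRandom.current().nextInt());

    T next(List<T> nodes) {
        // Math.floorMod keeps the result non-negative even after int overflow.
        return nodes.get(Math.floorMod(index.getAndIncrement(), nodes.size()));
    }
}
```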
Relates #14143
This change activates the doc_values on the _size field for indices created after 5.0.0-alpha4.
It also adds a note to the breaking changes that explains the situation and how to work around it.
Closes #18334
Due to some optimizations on the netty layer we had quite some code / cruft
added to the TcpTransport to allow for them. After cleaning up
BytesReference we can now move this optimization into TcpTransport and
have a simple send method on the implementation layer instead. This commit
adds a CompositeBytesReference that also allows message headers to be written
separately, which simplifies the header code as well since no skips are needed
anymore.
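A self-contained sketch of the concept, simplified to plain byte arrays (the real class composes BytesReference slices and implements its full interface):
```
// Present several buffers as one logical byte sequence, so a message header can
// be built in its own buffer and logically prepended to the body without copying.
class CompositeBytesSketch {
    private final byte[][] parts;

    CompositeBytesSketch(byte[]... parts) {
        this.parts = parts;
    }

    byte get(int index) {
        // Walk the parts to find the one containing the requested offset.
        for (byte[] part : parts) {
            if (index < part.length) {
                return part[index];
            }
            index -= part.length;
        }
        throw new IndexOutOfBoundsException();
    }

    int length() {
        int total = 0;
        for (byte[] part : parts) {
            total += part.length;
        }
        return total;
    }
}
```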
The top-level class Throwable represents all errors and exceptions in
Java. This hierarchy is divided into Error and Exception, the former
being serious problems that applications should not try to catch and the
latter representing exceptional conditions that an application might
want to catch and handle. This commit renames
org.elasticsearch.cli.UserError to org.elasticsearch.UserException to
make its name consistent with where it falls in this hierarchy.
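For illustration, a sketch of where the renamed class sits (simplified; the exit-code field mirrors its CLI use, details may differ):
```
// UserException is an Exception (a condition an application may catch and
// handle), not an Error (a serious problem applications should not catch).
class UserException extends Exception {
    final int exitCode; // illustrative: the CLI reports this code on exit

    UserException(int exitCode, String message) {
        super(message);
        this.exitCode = exitCode;
    }
}
```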
Relates #19254
Today when reading a malformed operation from the translog, we throw an
assertion error that is immediately caught and wrapped into a translog
corrupted exception. This commit replaces this with directly throwing a
translog corrupted exception instead.
Additionally, this cleanup also addresses a double-wrapped translog
corrupted exception. Namely, verifying the checksum can throw a translog
corrupted exception which the existing code would catch and wrap again
in a translog corrupted exception.
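A simplified before/after sketch (names and types are stand-ins for the actual translog code):
```
import java.io.IOException;

class TranslogReadSketch {
    void readOperation(boolean malformed) throws IOException {
        // Before (roughly): assert !malformed; with a catch block elsewhere
        // wrapping the resulting AssertionError in a corruption exception.
        // After: fail directly, and don't re-wrap an already-typed corruption
        // exception thrown by the checksum verification.
        if (malformed) {
            throw new IOException("translog corrupted"); // stand-in for TranslogCorruptedException
        }
    }
}
```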
Relates #19256
We've been slowly improving batch support in `ClusterService` so services won't need to implement this tricky logic themselves. These changes are blessed, but our logging infra hasn't caught up and we now log things like:
```
[2016-07-04 21:51:22,318][DEBUG][cluster.service ] [node_sm0] processing [put-mapping [type1],put-mapping [type1]]:
```
Depending on the `source` string this can get quite ugly (mostly in the ZenDiscovery area).
This PR adds some infra to improve logging, keeping non-batched tasks the same. As a result the above line looks like:
```
[2016-07-04 21:44:45,047][DEBUG][cluster.service ] [node_s0] processing [put-mapping[type0, type0, type0]]: execute
```
ZenDiscovery's waiting-on-join log line moved from:
```
[2016-07-04 17:09:45,111][DEBUG][cluster.service ] [node_t0] processing [elected_as_master, [1] nodes joined),elected_as_master, [1] nodes joined)]: execute
```
to:
```
[2016-07-04 22:03:30,142][DEBUG][cluster.service ] [node_t3] processing [elected_as_master ([3] nodes joined)[{node_t2}{R3hu3uoSQee0B6bkuw8pjw}{p9n28HDJQdiDMdh3tjxA5g}{127.0.0.1}{127.0.0.1:30107}, {node_t1}{ynYQfk7uR8qR5wKIysFlQg}{wa_OKuJHSl-Oyl9Gis-GXg}{127.0.0.1}{127.0.0.1:30106}, {node_t0}{pweq-2T4TlKPrEVAVW6bJw}{NPBSLXSTTguT1So0JsZY8g}{127.0.0.1}{127.0.0.1:30105}]]: execute
```
As a bonus, I removed all `zen-disco` prefixes to sources from that area.
Node IDs are currently randomly generated during node startup. That means they change every time the node is restarted. While this doesn't matter for ES proper, it makes it hard for external services to track nodes. Another, more minor, side effect is that indexing the output of, say, the node stats API results in creating new fields due to node IDs being used as keys.
The first approach I considered was to use the node's published address as the base for the id. We already [treat nodes with the same address as the same](https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/discovery/zen/NodeJoinController.java#L387) so this is a simple change (see [here](https://github.com/elastic/elasticsearch/compare/master...bleskes:node_persistent_id_based_on_address)). While this is simple and probably works for most cases, it is not perfect. For example, if after a node restart the node is not able to bind to the same port (because it's not yet freed by the OS), the node will still change identity. Also in environments where the host IP can change due to a host restart, identity will not be the same.
Due to those limitations, I opted to go with a different approach where the node id will be persisted in the node's data folder. This has the upside of connecting the id to the node's data. It also means that the host can be adapted in any way (replace network cards, attach storage to a new VM).
It does however also have downsides - we now run the risk of two nodes having the same id, if someone clones a data folder from one node to another. To mitigate this I changed the semantics of the protection against multiple nodes with the same address to be stricter - it will now reject the incoming join if a node exists with the same id but a different address. Note that if the existing node doesn't respond to pings (i.e., it's not alive) it will be removed and the new node will be accepted when it tries another join.
Last, and most importantly, this change requires that *all* nodes persist data to disk. This is a change from current behavior where only data & master nodes store local files. This is the main reason for marking this PR as breaking.
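A minimal sketch of the persistence idea, assuming a hypothetical `node.id` file (the real change stores the ID alongside the node's other on-disk metadata):
```
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.UUID;

// Generate the node ID once and reuse it across restarts by keeping it in the
// data folder; cloning the folder clones the ID, hence the stricter join checks.
class PersistentNodeId {
    static String loadOrCreate(Path dataDir) throws IOException {
        Path idFile = dataDir.resolve("node.id");
        if (Files.exists(idFile)) {
            return new String(Files.readAllBytes(idFile), StandardCharsets.UTF_8).trim();
        }
        String id = UUID.randomUUID().toString();
        Files.write(idFile, id.getBytes(StandardCharsets.UTF_8));
        return id;
    }
}
```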
Other less important notes:
- DummyTransportAddress is removed as we need a unique network address per node. Use `LocalTransportAddress.buildUnique()` instead.
- I renamed `node.add_lid_to_custom_path` to `node.add_lock_id_to_custom_path` to avoid confusion with the node ID which is now part of the `NodeEnvironment` logic.
- I removed the `version` parameter from `MetaDataStateFormat#write`, it wasn't really used and was just in the way :)
- TribeNodes are special in the sense that they do start multiple sub-nodes (previously known as client nodes). Those sub-nodes do not store local files but derive their ID from the parent node's id, so their IDs are generated consistently.