Due to some optimizations on the netty layer, quite a bit of code and cruft
was added to the TcpTransport to allow for them. After cleaning
up BytesReference we can now move this optimization into TcpTransport and
have a simple send method on the implementation layer instead. This commit
adds a CompositeBytesReference that also allows message headers to be written
separately, which simplifies the header code as well since no skips are needed
anymore.
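To illustrate the idea, here is a toy composite view (a simplified sketch using raw arrays; the real CompositeBytesReference composes BytesReference slices):
```
import java.util.Arrays;

// Toy composite view over multiple byte slices (sketch only, not the actual
// CompositeBytesReference): index lookups are routed to the owning slice.
class CompositeBytes {
    private final byte[][] slices;
    private final int[] offsets; // starting offset of each slice in the composite
    private final int length;

    CompositeBytes(byte[]... slices) {
        this.slices = slices;
        this.offsets = new int[slices.length];
        int len = 0;
        for (int i = 0; i < slices.length; i++) {
            offsets[i] = len;
            len += slices[i].length;
        }
        this.length = len;
    }

    int length() {
        return length;
    }

    byte get(int index) {
        // Find the slice that owns this index via binary search on the offsets.
        int i = Arrays.binarySearch(offsets, index);
        if (i < 0) {
            i = -(i + 1) - 1; // insertion point minus one is the owning slice
        }
        return slices[i][index - offsets[i]];
    }
}
```
With such a view, the header can be built as one slice and the serialized message as another, and the two can be combined without reserving header space up front or skipping bytes.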
The top-level class Throwable represents all errors and exceptions in
Java. This hierarchy is divided into Error and Exception, the former
being serious problems that applications should not try to catch and the
latter representing exceptional conditions that an application might
want to catch and handle. This commit renames
org.elasticsearch.cli.UserError to org.elasticsearch.UserException to
make its name consistent with where it falls in this hierarchy.
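For reference, the hierarchy and the new placement (a sketch; the `exitCode` field is an assumption about the CLI use case, not taken from this commit):
```
// java.lang.Throwable
// ├── java.lang.Error      - serious problems, e.g. OutOfMemoryError; don't catch
// └── java.lang.Exception  - conditions an application may catch and handle
//
// The renamed class extends Exception, matching its intent:
public class UserException extends Exception {
    public final int exitCode; // hypothetical: exit status for CLI tools

    public UserException(int exitCode, String message) {
        super(message);
        this.exitCode = exitCode;
    }
}
```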
Relates #19254
Today when reading a malformed operation from the translog, we throw an
assertion error that is immediately caught and wrapped into a translog
corrupted exception. This commit replaces that indirection by directly
throwing a translog corrupted exception instead.
Additionally, this cleanup addresses a double-wrapped translog corrupted
exception: verifying the checksum can throw a translog corrupted exception,
which the existing code would catch and wrap again in another translog
corrupted exception.
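A sketched before/after of the double wrap (hypothetical signatures, not the actual translog code):
```
// Sketch of the double-wrap being removed (hypothetical signatures).
class TranslogCorruptedException extends RuntimeException {
    TranslogCorruptedException(String message) { super(message); }
    TranslogCorruptedException(String message, Throwable cause) { super(message, cause); }
}

class ChecksumExample {
    static void verifyChecksum(long expected, long actual) {
        if (expected != actual) {
            throw new TranslogCorruptedException("checksum mismatch");
        }
    }

    // Before (sketch): catch and wrap, producing a doubly-wrapped exception.
    static void readOpBefore(long expected, long actual) {
        try {
            verifyChecksum(expected, actual);
        } catch (TranslogCorruptedException e) {
            throw new TranslogCorruptedException("operation is corrupt", e); // redundant wrap
        }
    }

    // After (sketch): let the original exception propagate directly.
    static void readOpAfter(long expected, long actual) {
        verifyChecksum(expected, actual);
    }
}
```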
Relates #19256
We've been slowly improving batch support in `ClusterService` so services won't need to implement this tricky logic themselves. These are welcome changes, but our logging infrastructure hasn't caught up, and we currently log things like:
```
[2016-07-04 21:51:22,318][DEBUG][cluster.service ] [node_sm0] processing [put-mapping [type1],put-mapping [type1]]:
```
Depending on the `source` string this can get quite ugly (mostly in the ZenDiscovery area).
This PR adds some infrastructure to improve the logging, keeping non-batched tasks the same. As a result the above line now looks like:
```
[2016-07-04 21:44:45,047][DEBUG][cluster.service ] [node_s0] processing [put-mapping[type0, type0, type0]]: execute
```
ZenDiscovery's waiting-on-join logging moved from:
```
[2016-07-04 17:09:45,111][DEBUG][cluster.service ] [node_t0] processing [elected_as_master, [1] nodes joined),elected_as_master, [1] nodes joined)]: execute
```
to:
```
[2016-07-04 22:03:30,142][DEBUG][cluster.service ] [node_t3] processing [elected_as_master ([3] nodes joined)[{node_t2}{R3hu3uoSQee0B6bkuw8pjw}{p9n28HDJQdiDMdh3tjxA5g}{127.0.0.1}{127.0.0.1:30107}, {node_t1}{ynYQfk7uR8qR5wKIysFlQg}{wa_OKuJHSl-Oyl9Gis-GXg}{127.0.0.1}{127.0.0.1:30106}, {node_t0}{pweq-2T4TlKPrEVAVW6bJw}{NPBSLXSTTguT1So0JsZY8g}{127.0.0.1}{127.0.0.1:30105}]]: execute
```
As a bonus, I removed all `zen-disco` prefixes from sources in that area.
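A minimal sketch of the batch-description idea (hypothetical names, not the exact ClusterService API):
```
import java.util.List;
import java.util.stream.Collectors;

// Sketch: a batched executor describes the whole batch once, instead of the
// log line naively concatenating every task's full source string.
class BatchedTaskDescriber {
    static String describe(String source, List<String> taskDescriptions) {
        // e.g. describe("put-mapping", ["type0", "type0", "type0"])
        //      -> "put-mapping[type0, type0, type0]"
        return source + taskDescriptions.stream().collect(Collectors.joining(", ", "[", "]"));
    }
}
```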
Node IDs are currently randomly generated during node startup. That means they change every time the node is restarted. While this doesn't matter for ES proper, it makes it hard for external services to track nodes. Another, more minor, side effect is that indexing the output of, say, the node stats API results in the creation of new fields due to node IDs being used as keys.
The first approach I considered was to use the node's published address as the base for the ID. We already [treat nodes with the same address as the same](https://github.com/elastic/elasticsearch/blob/master/core/src/main/java/org/elasticsearch/discovery/zen/NodeJoinController.java#L387) so this is a simple change (see [here](https://github.com/elastic/elasticsearch/compare/master...bleskes:node_persistent_id_based_on_address)). While this is simple and probably works for most cases, it is not perfect. For example, if after a restart the node is not able to bind to the same port (because it's not yet freed by the OS), the node will still change identity. Also, in environments where the host IP can change due to a host restart, identity will not be preserved.
Due to those limitations, I opted to go with a different approach where the node ID is persisted in the node's data folder. This has the upside of tying the ID to the node's data. It also means that the host can be adapted in any way (replace network cards, attach storage to a new VM); see the sketch after the notes below.
It does however also have downsides - we now run the risk of two nodes having the same ID if someone clones a data folder from one node to another. To mitigate this I changed the semantics of the protection against multiple nodes with the same address to be stricter - it will now reject the incoming join if a node exists with the same ID but a different address. Note that if the existing node doesn't respond to pings (i.e., it's not alive) it will be removed and the new node will be accepted when it tries another join.
Last, and most importantly, this change requires that *all* nodes persist data to disk. This is a change from current behavior where only data & master nodes store local files. This is the main reason for marking this PR as breaking.
Other less important notes:
- DummyTransportAddress is removed as we need a unique network address per node. Use `LocalTransportAddress.buildUnique()` instead.
- I renamed `node.add_lid_to_custom_path` to `node.add_lock_id_to_custom_path` to avoid confusion with the node ID which is now part of the `NodeEnvironment` logic.
- I removed the `version` parameter from `MetaDataStateFormat#write`; it wasn't really used and was just in the way :)
- TribeNodes are special in the sense that they do start multiple sub-nodes (previously known as client nodes). Those sub-nodes do not store local files but derive their IDs from the parent node's ID, so they are generated consistently.
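A minimal sketch of the persistence idea (hypothetical file name and helper, not the actual NodeEnvironment code):
```
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.UUID;

// Sketch: read the node ID from the data folder if one was persisted,
// otherwise generate one and persist it for future restarts.
class PersistentNodeId {
    static String loadOrCreate(Path dataDir) throws IOException {
        Path idFile = dataDir.resolve("node.id"); // hypothetical file name
        if (Files.exists(idFile)) {
            return new String(Files.readAllBytes(idFile), StandardCharsets.UTF_8).trim();
        }
        String nodeId = UUID.randomUUID().toString();
        Files.createDirectories(dataDir);
        Files.write(idFile, nodeId.getBytes(StandardCharsets.UTF_8));
        return nodeId;
    }
}
```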
Once all of these are migrated we'll be able to remove aggregation's
custom "streams" which function that same as NamedWriteable. It also
allows us to make most of the fields on aggregations final which is
rather nice.
Also starts to migrate MultiBucketAggregation.Bucket to Writeable,
allowing the buckets to have immutable parts.
Today throughout the codebase, catching Throwable is used with reckless
abandon. This is dangerous because the Throwable could be a fatal
virtual machine error resulting from an internal error in the JVM, an
out-of-memory error, or a stack-overflow error that leaves the virtual
machine in an unstable and unpredictable state. This commit removes
catching Throwable from the codebase and removes the temptation to use it
by modifying listener APIs to receive instances of Exception instead of
the top-level Throwable.
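A simplified sketch of the resulting listener shape (not the full ES ActionListener interface):
```
// Failure callbacks now take Exception, so fatal Errors (OutOfMemoryError,
// StackOverflowError, internal JVM errors) can no longer be silently
// swallowed by a handler.
interface ActionListener<Response> {
    void onResponse(Response response);

    void onFailure(Exception e); // previously: onFailure(Throwable t)
}
```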
Relates #19231
Rename `fields` to `stored_fields` and add `docvalue_fields`
The `stored_fields` parameter will no longer try to retrieve fields from the `_source`; it will only return stored fields.
`fields` will throw an exception if the user uses it.
Add `docvalue_fields` as an adjunct to the now-deprecated `fielddata_fields`. `docvalue_fields` will try to load the value from doc values and fall back to the fielddata cache if doc values are not enabled on that field.
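A hedged Java usage sketch (the builder method names are assumed to mirror the new REST parameters and are not verified against a specific client version):
```
import org.elasticsearch.search.builder.SearchSourceBuilder;

// Sketch: requesting stored fields and doc-value fields via the search builder.
class FieldRetrievalExample {
    static SearchSourceBuilder source() {
        return new SearchSourceBuilder()
            .storedField("title")      // replaces the removed `fields` parameter
            .docValueField("price");   // replaces the deprecated `fielddata_fields`
    }
}
```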
Closes #18943
With this commit we also propagate the `canTripCircuitBreaker`
setting for the main action in TransportBroadcastByNodeAction.
Previously, we set it only on the additional action added by
this handler.
Primary relocation and concurrent indexing can currently lead to a deadlock situation, as indexing operations are blocked on a (bounded) thread pool during the hand-off phase between the old and new primary. This change replaces the blocking of indexing operations by putting operations that cannot be executed during the relocation hand-off into a queue, to be executed once relocation completes.
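A minimal sketch of the queue-instead-of-block idea (hypothetical helper, not the actual hand-off code):
```
import java.util.ArrayList;
import java.util.List;

// Sketch: during hand-off, operations are queued rather than parking a
// bounded thread-pool worker; the queue drains when relocation completes.
class HandoffGate {
    private final List<Runnable> delayedOperations = new ArrayList<>();
    private boolean handoffInProgress = false;

    synchronized void execute(Runnable operation) {
        if (handoffInProgress) {
            delayedOperations.add(operation); // record, don't block the thread
        } else {
            operation.run();
        }
    }

    synchronized void beginHandoff() {
        handoffInProgress = true;
    }

    synchronized void completeHandoff() {
        handoffInProgress = false;
        delayedOperations.forEach(Runnable::run); // run once relocation completes
        delayedOperations.clear();
    }
}
```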
Closes #18553.
The only reason for LifecycleComponent taking a generic type was so that
it could return that type on its start and stop methods. However, this
chaining serves no practical purpose. Instead, start and stop can be
void, and a whole bunch of confusing generics disappear.
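Before and after, sketched:
```
// Before (sketch): the generic parameter existed only so start()/stop()
// could return `this` for chaining:
//
//     interface LifecycleComponent<T> { T start(); T stop(); }
//
// After (sketch): the chaining buys nothing, so the methods become void and
// the type parameter disappears everywhere it was threaded through.
interface LifecycleComponent {
    void start();
    void stop();
}
```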
Before, a repository would maintain an index file (named 'index') per
repository, that contained the current snapshots in the repository.
This file was not atomically written, so repositories had to depend on
listing the blobs in the repository to determine what the current
snapshots are, and only rely on the index file if the repository does
not support the listBlobs operation. This could cause an incorrect view
of the current snapshots in the repository if any prior snapshot delete
operations failed to delete snapshot metadata files.
This commit introduces the atomic writing of the index file, and because
atomic writes are not guaranteed if the file already exists, we write to
a generational index file (index-N, where N is the current generation).
We also maintain an index-latest file that contains the current
generation, for those repositories that cannot list blobs.
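A minimal sketch of the generational scheme (hypothetical helper with simplified file I/O; real repositories write blobs through a BlobContainer):
```
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: each write goes to a fresh "index-N" blob (safe because the name is
// new), then "index-latest" records N for repositories that cannot list blobs.
class SnapshotIndexWriter {
    static void writeSnapshotIndex(Path repoRoot, long generation, byte[] snapshotsBlob)
            throws IOException {
        // New generation, new file name: the write never overwrites an existing
        // blob, so readers see either the old generation or the new one.
        Files.write(repoRoot.resolve("index-" + generation), snapshotsBlob);
        // Record the current generation for repositories without listBlobs.
        Files.write(repoRoot.resolve("index-latest"),
                Long.toString(generation).getBytes(StandardCharsets.UTF_8));
    }
}
```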
Closes #19002
Relates #18156
These are the first aggregations with multiple `InternalAggregation`s
backing the same `AggregationBuilder`. This required a change in the
register method's signature.
NodeService has a "service attributes" map, which is only
set by HttpServer on start/stop. But the only thing HttpServer puts in this
map is already available as part of the HttpServer info that is added to
node info requests. This change removes the attributes map and removes
the dependency of HttpServer on NodeService.
BytesReference should be a really simple interface, yet it has a gazillion
ways to achieve the same thing. Methods like `#hasArray`, `#toBytesArray`, `#copyBytesArray`,
`#toBytesRef`, and `#bytes` are all really duplicates. This change simplifies the interface
dramatically and makes implementations of it much simpler. All array access has been removed
and is streamlined through a single `#toBytesRef` method. Utility methods to materialize a
compact byte array have been added too for convenience.
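For example, materializing a compact array now funnels through `#toBytesRef` (a sketch using the Lucene BytesRef type; the utility method name here is an assumption):
```
import org.apache.lucene.util.BytesRef;
import org.elasticsearch.common.bytes.BytesReference;

// Sketch: all array access goes through #toBytesRef; a compact copy is a
// utility layered on top of it.
class BytesReferenceExample {
    static byte[] toCompactArray(BytesReference reference) {
        BytesRef ref = reference.toBytesRef(); // the single array-access method
        byte[] compact = new byte[ref.length];
        System.arraycopy(ref.bytes, ref.offset, compact, 0, ref.length);
        return compact;
    }
}
```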
Migrates the `stats` and `extended_stats` aggregations and pipeline
aggregations from the special purpose aggregations streams to
`NamedWriteable`. These are the first pipeline aggregations so this
adds the infrastructure to support both streams and `NamedWriteable`s
for pipeline aggregations.
Raise IOException on deleteBlob if the blob doesn't exist
This commit raises an IOException on BlobContainer#deleteBlob
if the blob does not exist, in conformance with the BlobContainer
interface contract. Each implementation of BlobContainer now
conforms to this contract (file system, S3, Azure, HDFS). This
commit also contains blob container tests for each of the
repository implementations.
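For the file-system implementation, the contract can be met directly by `Files.delete`, which throws `NoSuchFileException` (an `IOException`) when the blob is missing; a simplified sketch, not the exact FsBlobContainer code:
```
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: deleting a missing blob fails with an IOException, per the contract.
class FsBlobDeletion {
    static void deleteBlob(Path containerPath, String blobName) throws IOException {
        Files.delete(containerPath.resolve(blobName)); // NoSuchFileException if absent
    }
}
```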
Closes #18530
We'll migrate to NamedWriteable so we can share code with the rest
of the system. So that we can work on this in multiple pull requests without
breaking Elasticsearch in between the commits, this change supports
*both* old-style `InternalAggregations.stream` serialization and
`NamedWriteable`-style serialization. As such it creates about a
half dozen `// NORELEASE` comments that will have to be removed
once the migration is complete.
This also introduces a boolean `transportClient` flag to `SearchModule`
which is used to skip inappropriate registrations for the
transport client while still registering the things it needs. In
this case that means that the `InternalAggregation` subclasses are
registered with the `NamedWriteableRegistry` but the `AggregationBuilder`
subclasses are not.
Finally, this moves aggregation registration from guice configuration
time to `SearchModule` construction time. This will make it simpler to
work with in the future as we further clean up Elasticsearch's
extension points.
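A toy sketch of the gating described above (hypothetical types and method names; the real SearchModule registers readers with a NamedWriteableRegistry):
```
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Sketch: construction-time registration gated by the transportClient flag.
class SearchModuleSketch {
    final Map<String, Supplier<?>> internalAggReaders = new HashMap<>();
    final Map<String, Supplier<?>> aggregationBuilders = new HashMap<>();
    final boolean transportClient;

    SearchModuleSketch(boolean transportClient) {
        this.transportClient = transportClient;
    }

    void registerAggregation(String name, Supplier<?> internalReader, Supplier<?> builderReader) {
        // InternalAggregation readers are needed wherever results are
        // deserialized, including on the transport client.
        internalAggReaders.put(name, internalReader);
        if (transportClient == false) {
            // Request-side builders are not needed on the transport client.
            aggregationBuilders.put(name, builderReader);
        }
    }
}
```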
We have long worked to capture different partitioning scenarios in our testing infrastructure. This PR adds a new variant, inspired by the Jepsen blogs, which had so far been missing - namely a partition where one node can still see and be seen by all other nodes. It also updates the resiliency page to better reflect all the work that was done in this area.
Currently there are cases when using TimeIntervalRounding#round() with date1 <
date2 where round(date2) < round(date1). These errors can happen when using a
non-fixed time zone if the values to be rounded are slightly after a time zone
offset change (e.g. a DST transition).
Here is an example for the "CET" time zone with a 45-minute rounding interval.
The dates to be rounded are on the left (with UTC timestamps), the rounded
values on the right. The error case is marked:
2011-10-30T01:40:00.000+02:00 1319931600000 | 2011-10-30T01:30:00.000+02:00 1319931000000
2011-10-30T02:02:30.000+02:00 1319932950000 | 2011-10-30T01:30:00.000+02:00 1319931000000
2011-10-30T02:25:00.000+02:00 1319934300000 | 2011-10-30T02:15:00.000+02:00 1319933700000
2011-10-30T02:47:30.000+02:00 1319935650000 | 2011-10-30T02:15:00.000+02:00 1319933700000
2011-10-30T02:10:00.000+01:00 1319937000000 | 2011-10-30T01:30:00.000+02:00 1319931000000 *
2011-10-30T02:32:30.000+01:00 1319938350000 | 2011-10-30T02:15:00.000+01:00 1319937300000
2011-10-30T02:55:00.000+01:00 1319939700000 | 2011-10-30T02:15:00.000+01:00 1319937300000
2011-10-30T03:17:30.000+01:00 1319941050000 | 2011-10-30T03:00:00.000+01:00 1319940000000
We should correct this by detecting that we are crossing a transition when
rounding, and in that case pick the largest valid rounded value before the
transition.
This change adds this correction logic to the rounding function and adds this
invariant to the randomized TimeIntervalRounding tests. It also adds the example
test case from above (with corrected behaviour) for illustrative purposes.
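As an illustration, here is a minimal sketch of that correction (a hypothetical class, not the actual Elasticsearch implementation), assuming Joda-Time's DateTimeZone API:
```
import org.joda.time.DateTimeZone;

// Sketch: floor to the interval in local time, then detect and repair
// rounded values that jumped back across an offset transition.
class TimeIntervalRoundingSketch {
    private final long interval;        // interval in milliseconds, e.g. 45 * 60 * 1000
    private final DateTimeZone timeZone;

    TimeIntervalRoundingSketch(long interval, DateTimeZone timeZone) {
        this.interval = interval;
        this.timeZone = timeZone;
    }

    long round(long utcMillis) {
        // Floor to the interval in local time, then map back to UTC leniently.
        long local = timeZone.convertUTCToLocal(utcMillis);
        long roundedLocal = Math.floorDiv(local, interval) * interval;
        long roundedUtc = timeZone.convertLocalToUTC(roundedLocal, false, utcMillis);
        // If the rounded value landed before the latest transition while the
        // input is after it, rounding crossed the transition and is no longer
        // monotonic: pick the largest valid rounded value before the transition.
        long transition = timeZone.previousTransition(utcMillis);
        if (transition != utcMillis && transition > roundedUtc) {
            roundedUtc = round(transition - 1);
        }
        return roundedUtc;
    }
}
```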
Today we have a ton of logic inside the NettyTransport* codebase. The footprint
of the code that has a direct netty dependency is large, and alternative implementations
are pretty hard to build today since they need to know all about our protocol etc.
This change moves most of the code into TcpTransport* base classes and moves all
the protocol send code together. The base classes now contain the majority of the logic
while the NettyTransport* classes remain to implement the glue code, configuration and optimizations.
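As a rough illustration of the resulting split (hypothetical signatures and toy framing, not the real protocol):
```
import java.nio.ByteBuffer;

// Sketch: shared protocol logic lives in the TCP-level base class, while an
// implementation such as NettyTransport only supplies the raw channel send.
abstract class TcpTransportSketch<Channel> {
    /** Shared by all implementations: frame the payload, then hand it off. */
    final void sendMessage(Channel channel, byte[] payload) {
        ByteBuffer framed = ByteBuffer.allocate(4 + payload.length);
        framed.putInt(payload.length); // toy length-prefixed framing
        framed.put(payload);
        sendBytes(channel, framed.array());
    }

    /** Glue code: each implementation sends bytes over its own channel type. */
    protected abstract void sendBytes(Channel channel, byte[] bytes);
}
```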