Issue #9566 raises the point that setting the number of shards on a closed index can leave that index unable to open again. This documentation change is meant to warn the user about this issue.
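A minimal sketch of the pitfall, assuming the 1.x Java admin client and an index called `my_index` (both are illustration-only assumptions):

```java
import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.ImmutableSettings;

final class ClosedIndexShardsPitfall {
    // Illustrative only: the API accepts this, but the index may never open again.
    static void resizeClosedIndex(Client client) {
        client.admin().indices().prepareClose("my_index").get();
        client.admin().indices().prepareUpdateSettings("my_index")
                .setSettings(ImmutableSettings.settingsBuilder()
                        .put("index.number_of_shards", 10)   // not safe to change after index creation
                        .build())
                .get();
        client.admin().indices().prepareOpen("my_index").get();  // this open may now fail
    }
}
```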
After phase1 of recovery is completed, we check that all pending mapping changes have been sent to the master and processed by the other nodes. This is needed in order to make sure that the target node has the latest mapping (we just copied over the corresponding Lucene files). To make sure we do not miss updates, we do so under a local cluster state update task. At the moment we don't have a timeout when waiting on the task to be completed. If the local node's cluster state update thread is very busy, this may stall the recovery for too long. This commit adds a timeout (equal to `indices.recovery.internal_action_timeout`) and upgrades the task urgency to `IMMEDIATE`. If we fail to perform the check, we fail the recovery.
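A minimal sketch of the mechanism, assuming the 1.x cluster-state task API (class names, method signatures and the `recoverySettings.internalActionTimeout()` accessor are assumptions, not a copy of the actual change):

```java
// Sketch only: run the mapping check as a high-priority, time-bounded local
// cluster state task, and fail the recovery if it cannot complete in time.
clusterService.submitStateUpdateTask("recovery_mapping_check", Priority.IMMEDIATE,
        new TimeoutClusterStateUpdateTask() {

            public TimeValue timeout() {
                // assumed accessor; bounded by indices.recovery.internal_action_timeout
                return recoverySettings.internalActionTimeout();
            }

            public ClusterState execute(ClusterState currentState) {
                // once this runs, every mapping update submitted before it has been
                // applied locally, so the copied Lucene files match the local mapping
                return currentState; // no change needed, we only need the ordering
            }

            public void clusterStateProcessed(String source, ClusterState oldState, ClusterState newState) {
                // safe to continue with the next recovery phase
            }

            public void onFailure(String source, Throwable t) {
                // timed out or failed: fail the recovery rather than stalling it
            }
        });
```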
Closes#9575
This commit adds more logging around gateway shard allocation. Any errors while reaching out to nodes to list their local shards are logged at `WARN`. Shard info loading time is logged at `DEBUG`. Also, we log a `WARN` message if an exception forces a full checksum check while reading the store metadata.
Closes#9562
We now have a very useful annotation to mark features or parameters as
experimental. Let's use it! This commit replaces some custom text warnings with
this annotation and adds this annotation to some existing features/parameters:
- inner_hits (unreleased yet)
- terminate_after (released in 1.4)
- per-bucket doc count errors in the terms agg (released in 1.4)
I also tagged with this annotation settings which should either not be needed
(like the ability to evict entries from the filter cache based on time) or which
reach too deep into the way Elasticsearch works, like the Directory
implementation or merge settings.
Close#9563
These aggregations are not experimental anymore but some of their parameters
still are:
- `precision_threshold` and `rehash` on `cardinality`
- `compression` on percentiles(-ranks)
Close#9560
That method checks that files were released properly, but it also clears a static map holding references to mock directories. Since we iterate over many indices, this created memory pressure.
This commit removes the FlushType entirely and replaces it in most places with
a simple `Engine#flush()` call. Flushing without committing the translog is now
entirely private to the engine and is only called in one place.
The `full` option and `FlushType.NEW_WRITER` only exist to allow
realtime changes to two settings (`index.codec` and `index.concurrency`).
Those settings are very expert and don't really need to be updatable
in realtime.
When the master publishes a new cluster state it waits (by default) for up to 30s for all nodes to respond. If they don't, it continues to process other pending tasks. At the moment, this timeout is logged at DEBUG, but it typically represents a serious issue with one or more of the nodes. We should log it at WARN and list the nodes that failed to respond in a timely fashion.
Closes#9551
This regression was introduced in #6908: the conversion from RandomAccessOrds
to SortedBinaryDocValues goes through Strings while both implementations actually
work on BytesRef, so the SortedBinaryDocValues instance can directly return the
BytesRefs produced by the RandomAccessOrds.
Close#9306
We currently have the IndicesRequest interface to mark indices-related requests and be able to retrieve the indices they relate to in a generic way. This commit introduces a similar abstraction for requests that manage aliases, to be able to retrieve/replace the aliases they relate to.
Also, IndicesAliasesRequest becomes a CompositeIndicesRequest, as it allows performing multiple operations (e.g. add/remove multiple aliases). Each single operation (AliasActions) now implements the newly introduced AliasesRequest.
AliasesRequest is also implemented by GetAliasesRequest, which allows retrieving aliases information.
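A hypothetical sketch of the new abstraction's shape (the method names mirror the description above and may differ from the actual code):

```java
import org.elasticsearch.action.IndicesRequest;

// Hypothetical shape: a request that manages aliases exposes the aliases it
// relates to and lets them be replaced, like IndicesRequest does for indices.
public interface AliasesRequest extends IndicesRequest {

    /** Returns the aliases this request relates to. */
    String[] aliases();

    /** Replaces the aliases this request relates to, e.g. after resolution. */
    void replaceAliases(String... aliases);
}
```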
Closes#9460
In big deployments the ClusterState can be large. To make sure we keep reusing objects that were promoted to the Old Gen, ZenDiscovery has an optimization where it tries to reuse existing IndexMetaData objects (containing, among other things, the mappings) from the current cluster state if they didn't change. The comparison currently uses the index name and the metadata version. This is however not enough and we should also check the index uuid. In extreme cases, where cluster state processing is slow and the index in question is deleted and recreated and these operations are batch processed together, we can use the wrong metadata if the version is also identical. This can happen if people create the index with all metadata predefined and no settings changed.
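A sketch of the strengthened reuse check (the accessor names follow 1.x conventions and are assumptions):

```java
import org.elasticsearch.cluster.metadata.IndexMetaData;

final class IndexMetaDataReuse {
    // Reuse the already-built IndexMetaData only if name, version AND uuid match;
    // a deleted-and-recreated index keeps its name but gets a new uuid.
    static boolean canReuse(IndexMetaData existing, IndexMetaData incoming) {
        return existing.index().equals(incoming.index())
                && existing.version() == incoming.version()
                && existing.uuid().equals(incoming.uuid());
    }
}
```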
Closes#9489
Closes#9541
When closing an instance of RestClient, the connection manager gets shut down, which makes it unusable from then on. If the connection manager is static, as it is now, no RestClient will work anymore from that moment on. Each instance of RestClient should have its own instance of the connection manager.
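A minimal sketch of the fix, assuming the client is built on Apache HttpClient (the builder calls below are standard HttpClient 4.x, not the actual RestClient code):

```java
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

// Each RestClient instance owns its own connection manager, so closing one
// client no longer breaks every other client in the JVM.
public class RestClient implements java.io.Closeable {

    private final PoolingHttpClientConnectionManager connectionManager =
            new PoolingHttpClientConnectionManager();          // instance field, not static
    private final CloseableHttpClient httpClient =
            HttpClients.custom().setConnectionManager(connectionManager).build();

    @Override
    public void close() throws java.io.IOException {
        httpClient.close();   // shuts down only this instance's connection manager
    }
}
```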
This method is heavy as it builds a bitset out of a DocIdSet in order to be
able to provide random access. Now that Lucene has removed out-of-order scoring,
true random access is very rarely needed and we could instead return a Bits
instance that wraps the iterator. Ideally, we would use the DISI API directly,
but I have to admit that the Bits API is more friendly.
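A sketch of the idea, using the Lucene `Bits` and `DocIdSetIterator` APIs and assuming callers check documents in non-decreasing order (which removing out-of-order scoring guarantees):

```java
import java.io.IOException;

import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.Bits;

// Wraps a forward-only iterator in a Bits view; much cheaper than building a
// bitset, but only valid when get() is called with non-decreasing doc ids.
final class IteratorBackedBits implements Bits {
    private final DocIdSetIterator iterator;
    private final int maxDoc;

    IteratorBackedBits(DocIdSetIterator iterator, int maxDoc) {
        this.iterator = iterator;
        this.maxDoc = maxDoc;
    }

    @Override
    public boolean get(int doc) {
        try {
            if (iterator.docID() < doc) {
                iterator.advance(doc);   // never called with target <= current docID
            }
            return iterator.docID() == doc;
        } catch (IOException e) {
            throw new IllegalStateException(e);
        }
    }

    @Override
    public int length() {
        return maxDoc;
    }
}
```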
Close#9546
We had a REST test that relied on matching a JSON response against a regex. It worked, but the match wasn't done against the actual JSON object; it was done against its Java map representation converted into a string by calling `toString`. Since all the other clients' test runners don't support this case, as they try to match a JSON object against a regex, we should do the same and prevent it from working.
The current "checkindex" on startup is very very expensive. This is
like running one of the old school hard drive diagnostic checkers and
usually not a good idea.
But we can do a CRC32 verification of files. We don't even need to
open an indexreader to do this, its much more lightweight.
This option (as well as the existing true/false) are randomized in
tests to find problems.
Also fix bug where use of the current option would always leak
an indexwriter lock.
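The idea in a nutshell, shown here with plain `java.util.zip.CRC32` (a standalone sketch; the real check uses the CRC32 stored in Lucene's file footer and excludes the footer's own checksum bytes):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.CRC32;

final class Crc32Verify {

    /** Streams the file once and compares its CRC32 against the expected value. */
    static boolean verify(Path file, long expectedChecksum) throws IOException {
        CRC32 crc = new CRC32();
        byte[] buffer = new byte[8192];
        try (InputStream in = Files.newInputStream(file)) {
            int read;
            while ((read = in.read(buffer)) != -1) {
                crc.update(buffer, 0, read);
            }
        }
        // A single sequential pass over the bytes, no IndexReader needed.
        return crc.getValue() == expectedChecksum;
    }
}
```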
Closes#9183
The histogram aggregation supports an `offset` option to move bucket boundaries.
In a histogram with buckets of size X, an offset value of Y moves the boundaries
from 0, X, 2X, 3X, ... to Y, X+Y, 2X+Y, 3X+Y, ...
The previous `pre_offset` and `post_offset` options are removed in favour of
the simplified `offset` option.
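For reference, the bucket a value falls into with an offset can be computed as below (a worked illustration, not the aggregation's actual code):

```java
final class HistogramOffsetExample {
    // bucketKey(value) = offset + interval * floor((value - offset) / interval)
    static long bucketKey(long value, long interval, long offset) {
        return offset + Math.floorDiv(value - offset, interval) * interval;
    }
    // With interval = 10 and offset = 3, boundaries fall at ..., 3, 13, 23, ...
    // so bucketKey(12, 10, 3) == 3 and bucketKey(13, 10, 3) == 13.
}
```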
Closes#9417
Closes#9505
Until now, there was no way to expose information about configured
transport profiles. This commit adds the ability to expose that
information in the TransportInfo class.
The channel as well as the netty pipeline handler now also contain
the profile they were configured for, as this information cannot be
extracted elsewhere.
In addition, each profile can now set its own publish host and port,
which might be needed in case of port forwarding or when using Docker.
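A sketch of what per-profile settings could look like with the 1.x settings builder (the profile name `client` and the exact publish key names are assumptions):

```java
// Illustrative only: a dedicated transport profile with its own publish host/port,
// e.g. for port forwarding or Docker setups.
Settings settings = ImmutableSettings.settingsBuilder()
        .put("transport.profiles.client.port", "9500-9600")
        .put("transport.profiles.client.publish_host", "192.168.1.10")  // assumed key
        .put("transport.profiles.client.publish_port", 9500)            // assumed key
        .build();
```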
Closes#9134
This is similar to refresh: if we fail to commit the data, we have to fail the
engine since the in-RAM data is likely discarded. The data is still in the translog and might
be recoverable when the node is restarted, but we have to treat the engine as failed.
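A minimal sketch of the failure handling described above (the `failEngine` helper and the exception type follow the engine's general shape and are assumptions here):

```java
try {
    indexWriter.commit();   // persist the in-memory Lucene segments
} catch (Throwable t) {
    // the in-RAM data may be gone; the operations are still in the translog,
    // but this engine instance can no longer be trusted
    failEngine("lucene commit failed", t);
    throw new FlushFailedEngineException(shardId, t);
}
```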
This callback is executed only once, on the master node during an
index's creation. An exception thrown from this listener will cancel
the index creation.
This also adds checks in `IndicesClusterStateService` for the
indexService being null, as well as for `indicesService.createIndex`
throwing an exception on data nodes after an index has already been
created.
#8720 introduced a timeout mechanism for ongoing recoveries, based on a last access time variable. In the many iterations on that PR the update of the access time was lost. This adds it back, including a test that should have been there in the first place.
Closes#9506
The query cache has an optimization to not deserialize the bytes at the shard
level. However, this is a bit fragile since it assumes that serialized streams
can be concatenated (which is not the case with shared strings) and also does
not update the QueryResult object that is held by the SearchContext, so you
need to make sure to use the right one.
With this change, the query cache just deserializes bytes into the QueryResult
object from the context.
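A sketch of the new flow (the `readFromWithId`-style deserialization call and the variable names here are assumptions about the result object's API, for illustration only):

```java
// Deserialize the cached bytes straight into the QueryResult instance that the
// SearchContext already holds, instead of building a detached copy.
StreamInput in = cachedBytes.streamInput();
context.queryResult().readFromWithId(context.id(), in);
```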
Close#9500