ShardActiveResponseHandler doesn't need to hold on to an entire cluster state since it only needs to know the cluster state version. It seems that on overloaded systems where nodes are unresponsive, holding onto a lot of different cluster states can make the situation worse.
Closes #21394
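As a minimal sketch of the idea (class and field names here are simplified assumptions, not the actual handler layout), the handler keeps only the version number rather than a reference to the full cluster state:

```java
// Illustrative sketch only: field names and layout are assumptions,
// not the real ShardActiveResponseHandler implementation.
public class ShardActiveResponseHandler {

    // Before: holding the whole (potentially large) cluster state alive.
    // private final ClusterState clusterState;

    // After: only the version number is retained, so the cluster state
    // object itself can be garbage collected on overloaded nodes.
    private final long clusterStateVersion;

    public ShardActiveResponseHandler(long clusterStateVersion) {
        this.clusterStateVersion = clusterStateVersion;
    }

    public long getClusterStateVersion() {
        return clusterStateVersion;
    }
}
```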
Ensures pending index-* blobs are deleted when snapshotting. The
index-* blobs are generational files that maintain the snapshots
in the repository. To write these atomically, we first write a
`pending-index-*` blob, then move it to `index-*`, which also deletes
`pending-index-*` in case it's not a file-system level move (e.g.
S3 repositories). For example, to write the 5th generation of the
index blob for the repository, we would first write the bytes to
`pending-index-5` and then move `pending-index-5` to `index-5`. It is
possible that we fail after writing `pending-index-5`, but before
moving it to `index-5` or deleting `pending-index-5`. In this case,
we will have a dangling `pending-index-5` blob lying around. Since
snapshot #5 would have failed, the next snapshot assumes a generation
number of 5, so it tries to write to `index-5`, which first tries to
write to `pending-index-5` before moving the blob to `index-5`. Since
`pending-index-5` is leftover from the previous failure, the snapshot
fails as it cannot overwrite this blob.
This commit solves the problem by, first, adding a UUID to the
`pending-index-*` blobs and, second, strengthening the logic around
failures when writing the `index-*` generational blob to ensure pending
files are deleted on cleanup.
Closes #21462
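A rough sketch of the resulting write path, using a hypothetical blob-store interface rather than the real repository API:

```java
import java.io.IOException;
import java.util.UUID;

// Hypothetical blob-store abstraction used only for illustration;
// the real repository code goes through Elasticsearch's blob container.
interface BlobStoreSketch {
    void writeBlob(String name, byte[] bytes) throws IOException;
    void move(String source, String target) throws IOException;
    void deleteBlob(String name) throws IOException;
}

class IndexBlobWriterSketch {
    /**
     * Writes the generational index blob atomically: first to a uniquely
     * named pending blob, then moved into place. Because the pending name
     * contains a UUID, a leftover pending blob from an earlier failed
     * snapshot can never collide with a later attempt, and any failure
     * triggers a best-effort cleanup of the pending blob.
     */
    static void writeIndexGen(BlobStoreSketch store, long generation, byte[] bytes) throws IOException {
        final String indexBlob = "index-" + generation;
        final String pendingBlob = "pending-" + indexBlob + "-" + UUID.randomUUID();
        try {
            store.writeBlob(pendingBlob, bytes);
            store.move(pendingBlob, indexBlob); // also removes the pending blob on non-filesystem moves
        } catch (IOException e) {
            try {
                store.deleteBlob(pendingBlob); // ensure no dangling pending blob is left behind
            } catch (IOException cleanupFailure) {
                e.addSuppressed(cleanupFailure);
            }
            throw e;
        }
    }
}
```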
This change was reverted after it caused random test failures. This was
due to a copy/paste error in the original PR which caused the mock
version of ClusterInfoService to be used whenever MockZenPing was
used, and the real ClusterInfoService to be used when MockZenPing was
not used.
Currently, pending operations can complete after tests with a disruption scheme
have completed. This commit waits for pending operations to complete after the
tests are run.
* Allows for an array of index template patterns to be provided to an
index template, and renames the field from 'template' to 'index_pattern'.
Closes #20690
When logging a mock exception, Log4j attempts to render the stack
trace. On a mock exception, this will be null and Log4j will hit a
NullPointerException. This NullPointerException will get recorded in the
status logger buffer that we use to ensure that we do not have any
misuses of Log4j in production code. This commit replaces the use of a
mock exception with an actual exception to avoid angering the Log4j
assertions in ESTestCase.
Today when a message is not fully read on a response, we log (among
other details) the handler name. Unfortunately, if the handler is a
wrapper, all that we see is
o.e.t.TransportService$ContextRestoreResponseHandler@7446ba18
completely losing the offending handler. This commit adds an override
for TransportService$ContextRestoreResponseHandler#toString so that the
underlying offender can be discovered.
Relates #21478
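A simplified sketch of the shape of that override (the types here are stand-ins, not the real TransportService internals):

```java
// Simplified stand-ins; the real class is TransportService$ContextRestoreResponseHandler.
interface ResponseHandlerSketch<T> {
    void handleResponse(T response);
}

class ContextRestoreResponseHandlerSketch<T> implements ResponseHandlerSketch<T> {

    private final ResponseHandlerSketch<T> delegate;

    ContextRestoreResponseHandlerSketch(ResponseHandlerSketch<T> delegate) {
        this.delegate = delegate;
    }

    @Override
    public void handleResponse(T response) {
        // The real wrapper restores the thread context around this call.
        delegate.handleResponse(response);
    }

    @Override
    public String toString() {
        // Surface the wrapped handler instead of the opaque wrapper identity,
        // so "message not fully read" logs name the offending handler.
        return getClass().getName() + "/" + delegate.toString();
    }
}
```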
Today when handling responses from nodes in TransportNodesAction, if a
node times out or some other failure occurs and the action is not
accumulating exceptions, we log a confusing message:
org.elasticsearch.action.admin.cluster.stats.TransportClusterStatsAction]
ignoring unexpected response [null] of type [null], expected
[ClusterStatsNodeResponse] or [FailedNodeException]
Moreover, the original exception is completely lost. Since this log
message is confusing and unhelpful, we can drop it. Instead, we hold
onto the exception and log it at the warn level before dropping it from
the response.
Relates #21476
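A hedged sketch of the new handling, using plain java.util.logging as a stand-in for the real logger and simplified method signatures:

```java
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

class NodeFailureHandlingSketch {

    private static final Logger logger = Logger.getLogger(NodeFailureHandlingSketch.class.getName());

    /**
     * Illustrative only: when a node-level request fails and the action does
     * not accumulate exceptions, log the original exception at warn level and
     * drop it from the response, instead of emitting the confusing
     * "unexpected response [null]" message.
     */
    static void onNodeFailure(String nodeId, Exception failure,
                              boolean accumulateExceptions, List<Exception> failures) {
        if (accumulateExceptions) {
            failures.add(failure);
        } else {
            logger.log(Level.WARNING, "ignoring unexpected failure from node [" + nodeId + "]", failure);
        }
    }
}
```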
This commit updates JodaTime to version 2.9.5, which contains a fix for a bug when parsing time zones (see https://github.com/JodaOrg/joda-time/pull/332, https://github.com/JodaOrg/joda-time/issues/386 and https://github.com/JodaOrg/joda-time/issues/373).
It also removes the joda-convert dependency, which seems to be unused.
Closes #20911
Here is the changelog for 2.9.5:
```
Changes in 2.9.5
----------------
- Add Norwegian period translations [#378]
- Add Duration.dividedBy(long,RoundingMode) [#69, #379]
- DateTimeZone data updated to version 2016i
- Fixed bug where clock read twice when comparing two nulls in DateTimeComparator [#404]
- Fixed minor issues with historic time-zone data [#373]
- Fix bug in time-zone binary search [#332, #386]
The fix in v2.9.2 caused problems when the time-zone being parsed
was not the last element in the input string. New approach uses a
different approach to the problem.
- Update tests for JDK 9 [#394]
- Close buffered reader correctly in zone info compiler [#396]
- Handle locale correctly zone info compiler [#397]
```
The method used to be called `isSourceEmpty`, and was renamed to `hasSource`, but the return value never changed. Updated tests and users accordingly.
Closes #21419
Currently, the `executeIndexRequestOnPrimary` and `executeDeleteRequestOnPrimary`
methods, used to prepare and execute write operations, modify the provided
request by updating the version and versionType. This commit makes it the
caller's responsibility to update the request version and versionType, and avoids
mutating the provided request in the execute methods.
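An illustrative sketch of the inverted responsibility, with hypothetical request/result types standing in for the real ones:

```java
// Hypothetical, simplified types for illustration; the real code uses the
// actual transport-level requests and engine results.
class WriteResultSketch {
    final long version;

    WriteResultSketch(long version) {
        this.version = version;
    }
}

class IndexRequestSketch {
    private long version;

    void version(long version) {
        this.version = version;
    }

    long version() {
        return version;
    }
}

class PrimaryExecutionSketch {

    // The execute method no longer mutates the request; it only reports the
    // version that was actually assigned on the primary.
    static WriteResultSketch executeIndexRequestOnPrimary(IndexRequestSketch request) {
        long assignedVersion = request.version() + 1; // placeholder versioning logic
        return new WriteResultSketch(assignedVersion);
    }

    // The caller is now responsible for carrying the resolved version forward,
    // e.g. onto the request that gets replicated to the replicas.
    static void indexOnPrimary(IndexRequestSketch request) {
        WriteResultSketch result = executeIndexRequestOnPrimary(request);
        request.version(result.version);
    }
}
```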
This commit introduces a new execution mode for the
`simple_query_string` query, which is intended down the road to be a
replacement for the current _all field.
It now does auto-field-expansion and auto-leniency when the following criteria
are ALL met:
* The _all field is disabled
* No default_field has been set in the index settings
* No fields are specified in the request
Additionally, a user can force the "all-like" execution by setting the
all_fields parameter to true.
When executing in all field mode, the `simple_query_string` query will
look at all the fields in the mapping that are not metafields and can be
searched, and automatically expand the list of fields that are going to
be queried.
Relates to #20925, which is the `query_string` version of this work.
This is basically the same behavior, but for the `simple_query_string`
query.
Relates to #19784
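The gating logic can be sketched roughly as follows (parameter names are illustrative, not the actual implementation):

```java
class AllFieldsModeSketch {

    /**
     * Illustrative decision only: the query runs in "all fields" mode when the
     * user forces it via all_fields, or when the _all field is disabled, no
     * default_field is configured for the index, and the request names no fields.
     */
    static boolean useAllFieldsMode(boolean allFieldsParam,
                                    boolean allFieldEnabled,
                                    boolean defaultFieldConfigured,
                                    int fieldsInRequest) {
        if (allFieldsParam) {
            return true;
        }
        return allFieldEnabled == false
            && defaultFieldConfigured == false
            && fieldsInRequest == 0;
    }
}
```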
This commit clarifies some log messages for the bootstrap checks. The
message is that the user has limits set that are below the minimums that
Elasticsearch requires.
Relates #21423
This commit ensures that we always restore the thread's original context after execution of
a context preserving runnable. We always wrap runnables in a wrapper that restores the context
at the time it was submitted to the execute method. The ContextPreservingAbstractRunnable
would restore the calling context in the doRun method and then, in a try-with-resources
block, would restore the thread's original context. However, the onFailure and onAfter methods
of an AbstractRunnable could modify the thread context and this modified thread context would
continue on as the thread's context after it was returned to the pool and potentially used
for a different purpose.
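A self-contained sketch of the guarantee being added, using a plain ThreadLocal as a stand-in for Elasticsearch's ThreadContext:

```java
// Simplified stand-in for the thread context machinery; the real code uses
// org.elasticsearch.common.util.concurrent.ThreadContext.
class ContextPreservingRunnableSketch implements Runnable {

    static final ThreadLocal<String> THREAD_CONTEXT = ThreadLocal.withInitial(() -> "default");

    private final Runnable delegate;
    private final String storedContext; // captured at submission time

    ContextPreservingRunnableSketch(Runnable delegate) {
        this.delegate = delegate;
        this.storedContext = THREAD_CONTEXT.get();
    }

    @Override
    public void run() {
        final String original = THREAD_CONTEXT.get();
        try {
            // Restore the submitter's context for the duration of the task,
            // including any onFailure/onAfter style callbacks the delegate runs.
            THREAD_CONTEXT.set(storedContext);
            delegate.run();
        } finally {
            // Always put the pool thread back into its original context so a
            // later task cannot observe context leaked from this one.
            THREAD_CONTEXT.set(original);
        }
    }
}
```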
* Plugins: Convert custom discovery to pull based plugin
This change primarily moves registering custom Discovery implementations
to the pull based DiscoveryPlugin interface. It also keeps the cloud
based discovery plugins re-registering ZenDiscovery under their own name
in order to maintain backwards compatibility. However,
discovery.zen.hosts_provider is changed here to no longer fallback to
discovery.type. Instead, each plugin which previously relied on the
value of discovery.type now sets the hosts_provider to itself if
discovery.type is set to itself, along with a deprecation warning.
This commit cleans up the formatting of some Javadocs in
BootstrapCheck.java, and corrects some docs that had become stale as the
bootstrap checks evolved.
The ClusterState class currently has a mutable volatile field "status" that is only used by the ClusterStateObserver to differentiate between a cluster state that is being applied and one that has already been applied. This commit removes the field from cluster state, making it a truly immutable class. This information is stored instead by ClusterService, which is the only place that should update this field (PublishClusterStateAction was also updating it, but that information was never used anywhere). A new class is introduced (ClusterServiceState) which encompasses the current cluster state as well as the current status, which is only used by the ClusterStateObserver mechanism.
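A simplified sketch of the shape of the new holder (types and accessor names are assumptions):

```java
// Simplified illustration; the real ClusterServiceState pairs an actual
// ClusterState instance with the current status.
final class ClusterServiceStateSketch {

    enum ClusterStateStatus { BEING_APPLIED, APPLIED }

    private final Object clusterState;        // stand-in for the immutable ClusterState
    private final ClusterStateStatus status;  // tracked by ClusterService, not by ClusterState

    ClusterServiceStateSketch(Object clusterState, ClusterStateStatus status) {
        this.clusterState = clusterState;
        this.status = status;
    }

    Object getClusterState() {
        return clusterState;
    }

    ClusterStateStatus getClusterStateStatus() {
        return status;
    }
}
```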
Applied (almost) the same rules we use to validate index names
to new alias names. The only rule not applied is "must be lowercase".
We have tests that don't follow that rule and I expect there are lots
of examples of camelCase alias names in the wild. We can add that
validation but I'm not sure it is worth it.
Closes #20748
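Roughly, the validation looks like the sketch below; the exact character rules shown are illustrative, the point being that the lowercase check is deliberately omitted:

```java
class AliasNameValidationSketch {

    /**
     * Illustrative only: apply (almost) the same character rules used for
     * index names, but deliberately skip the "must be lowercase" rule so
     * existing camelCase aliases keep working.
     */
    static void validateAliasName(String alias) {
        if (alias == null || alias.isEmpty()) {
            throw new IllegalArgumentException("alias name must not be empty");
        }
        if (alias.startsWith("_") || alias.startsWith("-") || alias.startsWith("+")) {
            throw new IllegalArgumentException("alias name must not start with '_', '-', or '+'");
        }
        for (char c : new char[] {' ', '"', '*', '\\', '/', '<', '>', '|', ',', '#'}) {
            if (alias.indexOf(c) != -1) {
                throw new IllegalArgumentException("alias name must not contain '" + c + "'");
            }
        }
        // Note: no lowercase check here, unlike index name validation.
    }
}
```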
Adds an alias that starts with `#` to the BWC index and validates
that you can read from it and remove it. Starting with `#` isn't
allowed after 5.1.0/6.0.0 so we don't create the alias or check it
after those versions.
Before this change Eclipse (4.6.1) would show compile errors in `ClusterRerouteRequestTests` for the elements in `RANDOM_COMMAND_GENERATORS`. This seems to be because the Eclipse compiler does not recognise that the elements in the list are all `Supplier`s of objects that are subclasses of `AllocationCommand`.
This change fixes the problem by adding a generics hint to the `Arrays.asList()` call.
All plugins currently have their own licenses dir for the
dependencyLicenses task, but core disables this and has the check inside
distribution. This may have been better for Maven, but for
Gradle it makes more sense to just use the dependencyLicenses task that
automatically exists inside :core, and remove the hacked up version that
is inside distribution.
This commit changes the Javadocs in OsProbe.java to take advantage of
the fact that the line-length limit is 140 characters. This change makes
these Javadocs easier to read and easier to maintain.
This commit removes JVMCheck. Previously there were three checks in this
class:
- check for super word bug in JDK 7
- check for G1GC bugs in JDK 8
- check for broken IBM JDKs
The first check is obsolete since we require JDK 8 now. The second check
is refactored into a bootstrap check. The third check is removed since
we do not even support the IBM JDK.
Relates #21389
Before, when requesting to get the snapshots in a repository, if `_all` was
specified for the set of snapshots to get, any in-progress snapshots would
be returned twice. This commit fixes the issue ensuring that each snapshot,
whether in-progress or completed, is only returned once when making a call to
get snapshots (GET /_snapshot/{repository_name}/_all).
This commit also enables `_current` to appear anywhere in
the get snapshots list; it is treated as a pseudo-regex that
means "match any currently running snapshots".
Closes #21335
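A conceptual sketch of the deduplication and `_current` handling (collection types and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

class GetSnapshotsSketch {

    /**
     * Illustrative only: resolve the requested snapshot names against both
     * in-progress and completed snapshots, returning each snapshot at most
     * once and treating "_current" as "any currently running snapshot".
     */
    static List<String> resolveSnapshots(List<String> requested,
                                         List<String> currentSnapshots,
                                         List<String> completedSnapshots) {
        Set<String> result = new LinkedHashSet<>();
        for (String name : requested) {
            if ("_all".equals(name)) {
                result.addAll(currentSnapshots);
                result.addAll(completedSnapshots); // set semantics drop duplicates
            } else if ("_current".equals(name)) {
                result.addAll(currentSnapshots);
            } else {
                result.add(name);
            }
        }
        return new ArrayList<>(result);
    }
}
```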
Today when acquiring load average stats, this method is rather lenient
when reading /proc/loadavg. This commit removes this leniency so that
any such issues are more likely to be exposed to the end-user.
Relates #21380
Adds support for `?slices=N` to reindex which automatically
parallelizes the process using parallel scrolls on `_uid`. Performance
testing sees a 3x performance improvement for simple docs
on decent hardware, maybe 30% performance improvement
for more complex docs. Still compelling, especially because
clusters should be able to get closer to the 3x than the 30%
number.
Closes #20624
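Conceptually, each slice worker only processes documents whose `_uid` hashes into its slice; the sketch below illustrates the partitioning idea with a simple hash, not the actual implementation:

```java
import java.util.ArrayList;
import java.util.List;

class SlicedReindexSketch {

    /**
     * Conceptual illustration only: each of the N workers scrolls the source
     * in parallel and keeps only the documents whose _uid falls into its
     * slice, so the slices are disjoint and together cover everything.
     */
    static boolean belongsToSlice(String uid, int sliceId, int maxSlices) {
        return Math.floorMod(uid.hashCode(), maxSlices) == sliceId;
    }

    static List<String> docsForSlice(List<String> uids, int sliceId, int maxSlices) {
        List<String> slice = new ArrayList<>();
        for (String uid : uids) {
            if (belongsToSlice(uid, sliceId, maxSlices)) {
                slice.add(uid);
            }
        }
        return slice;
    }
}
```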
Today when writing an unbounded queue_size in the cat thread pool API,
we write null. This commit modifies this so that the output is -1 so
that the output is always present, and always a numeric value.
Relates #21342
Exists requests are supposed to never throw an exception, but rather return true or false depending on whether some resource exists or not. Indices exists does that for indices and accepts wildcard expressions too. The way the API works internally is by resolving indices and catching IndexNotFoundException: if an exception is thrown the index does not exist, hence it returns false, otherwise it returns true. That works correctly only if the ignore_unavailable and allow_no_indices indices options are both set to false, meaning that they are strict and any missing index or wildcard expression that resolves to no indices will lead to an exception being thrown, causing false to be returned.
Unfortunately the indices options have been configurable up until now for this request, meaning that one can set ignore_unavailable or allow_no_indices to true and have the indices exists request return true for indices that really don't exist, which makes very little sense in the context of this API.
This commit removes the indicesOptions setter from IndicesExistsRequest and makes only expandWildcardsOpen and expandWildcardsClosed settable, hence a subset of the available indices options. This way we can guarantee more consistent behaviour of the indices exists API. We can then remove the ignore_unavailable and allow_no_indices options from the indices exists API spec.
Today if you start Elasticsearch with the status logger configured to
the warn level, or use a transport client with the default status logger
level, you will see warn messages about deprecation loggers being
created with different message factories and that formatting might be
broken. This happens because the deprecation logger is constructed using
the message factory from its parent, an artifact leftover from the first
Log4j 2 implementation that used a custom message factory. When that
custom message factory was removed, this constructor invocation should
have been changed to not explicitly use the message factory from the
parent. This commit fixes this invocation. However, we also added some
status checking to all tests to ensure that there are no warn status log
messages that might indicate a configuration problem with Log4j 2. These
assertions blow up badly without the fix for the deprecation logger
construction, and also caught a misconfiguration in one of the logging
tests.
Relates #21339
Today we validate the target index name late and therefore don't fail, for instance,
if the target index already exists and `dry_run=true` was specified. This change
validates the index name before we terminate early when dry_run is set.
Closes #21149
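A sketch of the reordering, with placeholder method names:

```java
class TargetIndexValidationSketch {

    /**
     * Illustrative ordering only: the target name is validated (including the
     * "index already exists" check) before dry_run short-circuits the request,
     * so a dry run reports the same validation failures a real run would.
     */
    static String evaluate(String targetIndex, boolean targetAlreadyExists, boolean dryRun) {
        validateTargetName(targetIndex, targetAlreadyExists); // now runs first
        if (dryRun) {
            return "dry run: would create [" + targetIndex + "]";
        }
        return "created [" + targetIndex + "]";
    }

    private static void validateTargetName(String targetIndex, boolean alreadyExists) {
        if (alreadyExists) {
            throw new IllegalArgumentException("target index [" + targetIndex + "] already exists");
        }
    }
}
```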