If an operating system reports -1 for the total bytes of a filesystem
path, we should ignore it when capturing the least and most available
statistics.
Relates to #15919
Squashed commit of the following:
commit 5d2258ffeff8a0d156295dcc754ab9b6cbb4b02e
Author: Lee Hinman <lee@writequit.org>
Date: Thu Jan 14 14:14:27 2016 -0700
Change test to test positive total with negative 'free' value
commit 927e61d4b39692fc147220a955b63b291ad80db5
Author: Lee Hinman <lee@writequit.org>
Date: Thu Jan 14 13:09:28 2016 -0700
Skip capturing least/most FS info for an FS with no total
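A minimal sketch of the guard described above, using simplified stand-in
types rather than the actual FsInfo classes:

    import java.util.List;

    // Illustrative stand-in for a filesystem path's stats.
    class FsPathSketch {
        final long totalBytes;      // -1 when the OS cannot determine a total
        final long availableBytes;

        FsPathSketch(long totalBytes, long availableBytes) {
            this.totalBytes = totalBytes;
            this.availableBytes = availableBytes;
        }

        // Pick the path with the least available bytes, skipping paths
        // whose total the OS reported as -1.
        static FsPathSketch leastAvailable(List<FsPathSketch> paths) {
            FsPathSketch least = null;
            for (FsPathSketch p : paths) {
                if (p.totalBytes < 0) {
                    continue; // no usable total; ignore for least/most stats
                }
                if (least == null || p.availableBytes < least.availableBytes) {
                    least = p;
                }
            }
            return least;
        }
    }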
This commit removes the timeout retry mechanism from ShardStateAction,
allowing it to instead be handled by the general master channel retry
mechanism. The idea is that if there is a network issue, the master will
miss a ping; the resulting timeout closes the channel, which surfaces as
a NodeDisconnectedException. At that point, we can simply wait for a new
master and retry, as with any other master channel exception.
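A tiny sketch of the classification this relies on; the exception types
here are local stand-ins for the real transport exceptions:

    // Stand-ins for the real transport-layer exception types.
    class NodeDisconnectedException extends RuntimeException {}
    class NotMasterException extends RuntimeException {}

    class ChannelFailureSketch {
        // Disconnects and "no master" conditions count as retryable
        // master channel failures.
        static boolean isMasterChannelException(Throwable t) {
            return t instanceof NodeDisconnectedException
                || t instanceof NotMasterException;
        }
    }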
This commit moves the handling of channel failures when failing a shard
to o.e.c.a.s.ShardStateAction. This means that shard failure requests
that time out, or that are sent when there is no master or when the
master leaves after the request is sent, will now be retried from here.
The listener for a shard failed request will now only be notified upon
successful completion of the shard failed request, or when a
catastrophic non-channel failure occurs.
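A hedged sketch of that listener contract (illustrative types, not the
actual ShardStateAction code):

    abstract class ShardFailedSketch {
        interface Listener {
            void onSuccess();
            void onFailure(Exception e);
        }

        // Only a successful acknowledgment notifies the listener of success.
        void onResponse(Listener listener) {
            listener.onSuccess();
        }

        // Channel failures (no master, master left, timeout, disconnect) are
        // retried internally; the listener only sees catastrophic
        // non-channel failures.
        void onFailure(Exception e, Listener listener) {
            if (isMasterChannelException(e)) {
                waitForNewMasterAndRetry(listener);
            } else {
                listener.onFailure(e);
            }
        }

        abstract boolean isMasterChannelException(Exception e);
        abstract void waitForNewMasterAndRetry(Listener listener);
    }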
This commit adds a simulation of the master leaving after a shard
failure request has been sent. In this case, after a new cluster state
is published (simulating a new master having been elected), the request
to fail the shard should be retried.
This commit handles the situation when we are failing a shard and either
no master is known, or the known master left while failing the shard. We
handle this situation by waiting for a new master to be reelected, and
then sending the shard failed request to the new master.
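A rough sketch of the wait-and-resend step, with a simplified observer
in place of the real cluster state observer:

    import java.util.function.Consumer;

    class MasterWaitSketch {
        interface ClusterStateObserver {
            // invokes the callback once a cluster state with a new master
            // is observed
            void waitForNextChange(Consumer<String> onNewMaster);
        }

        private final ClusterStateObserver observer;

        MasterWaitSketch(ClusterStateObserver observer) {
            this.observer = observer;
        }

        void waitForNewMasterAndRetry(Runnable resend) {
            observer.waitForNextChange(newMaster -> {
                // a new master was elected; resend the shard failed request
                resend.run();
            });
        }
    }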
The main concern with the dedicated ingest TP is that there are already many TPs, and in the case of beefy nodes we would add many more threads. In the case where ingest isn't used, all these threads are just idle.
Add serialization capabilities to RescoreBuilder and make
all QueryRescorer implementations implement NamedWriteable, also requiring
all implementations of RescoreBuilder.Rescorer to implement
equals() and hashCode().
In addition, the current rescore mode enumeration is pulled out to a
separate class to make sharing of constants easier between
the query builder's XContent rendering code and the parser.
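An illustrative sketch of the equals()/hashCode() contract this asks
for, on a simplified stand-in class rather than an actual rescorer:

    import java.util.Objects;

    class RescorerSketch {
        private final String queryName;
        private final float rescoreQueryWeight;

        RescorerSketch(String queryName, float rescoreQueryWeight) {
            this.queryName = queryName;
            this.rescoreQueryWeight = rescoreQueryWeight;
        }

        @Override
        public boolean equals(Object obj) {
            if (this == obj) return true;
            if (obj == null || getClass() != obj.getClass()) return false;
            RescorerSketch other = (RescorerSketch) obj;
            return Float.compare(rescoreQueryWeight, other.rescoreQueryWeight) == 0
                && Objects.equals(queryName, other.queryName);
        }

        @Override
        public int hashCode() {
            return Objects.hash(queryName, rescoreQueryWeight);
        }
    }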
This change affects get alias, get aliases, as well as cat aliases. They all return closed indices too by default. get alias and get aliases also allow returning only open indices through the `expand_wildcards` option (set it to `open`).
Closes #14982
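For example (hypothetical index and alias names), restricting a get
alias call to open indices only:

    GET /logs/_alias/my-alias?expand_wildcards=open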
This undocumented setting was mainly used for testing and as a safety net for
a new flushOnClose feature back in 1.x. We can now safely remove this setting.
The test usage now uses a mock plugin setting to achieve the same.
Today we maintain a lot of settings at the shard level which are all index level settings.
In order to cut over to the new settings API, where we register update listeners, we have to move
all of them to the index level; otherwise we need a way to un-register listeners, which is error-prone
and requires additional handling when shards are closed. It's simpler and also more accurate to handle all of
them at the index level, where we can trash the entire registry of update listeners once the index goes out of scope.
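A rough sketch of the idea with stand-in types: listeners live on one
per-index registry, which is simply discarded when the index closes.

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.CopyOnWriteArrayList;
    import java.util.function.Consumer;

    class IndexSettingsSketch {
        private final List<Consumer<Map<String, String>>> updateListeners =
            new CopyOnWriteArrayList<>();

        // shards register here instead of holding their own listeners
        void addSettingsUpdateConsumer(Consumer<Map<String, String>> listener) {
            updateListeners.add(listener);
        }

        void applySettingsUpdate(Map<String, String> updated) {
            for (Consumer<Map<String, String>> listener : updateListeners) {
                listener.accept(updated);
            }
        }

        // no un-registration needed: when the index goes out of scope,
        // the entire registry is trashed with it
    }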
Don't set the suppressed Exception in Translog.closeOnTragicEvent(Exception ex) if it is an
AlreadyClosedException. The ACE is thrown by the TranslogWriter and might contain,
as its cause, the very Exception that we would add the suppressed ACE to. We would then
end up with a circular reference, where Exception A has a suppressed Exception B that
has A as its cause. This would cause a StackOverflowError when we try to serialize it.
For a more detailed description see #15941.
Closes #15941
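A hedged sketch of the fix, with a local stand-in for Lucene's
AlreadyClosedException:

    class TranslogSketch {
        // stand-in for org.apache.lucene.store.AlreadyClosedException
        static class AlreadyClosedException extends IllegalStateException {
            AlreadyClosedException(String message, Throwable cause) {
                super(message, cause);
            }
        }

        void close() {
            // closing may throw an AlreadyClosedException whose cause is the
            // original tragic exception
        }

        void closeOnTragicEvent(Exception ex) {
            try {
                close();
            } catch (AlreadyClosedException ace) {
                // do NOT call ex.addSuppressed(ace): ace may carry `ex` as its
                // cause, and ex -> suppressed ace -> cause ex is a cycle that
                // blows the stack when the exception is serialized
            } catch (Exception other) {
                ex.addSuppressed(other);
            }
        }
    }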
ContextAndHeaders has a massive impact on the core infrastructure since it has to
be passed along manually to all relevant places across threads, network calls, etc. For the same reason
it's also very error prone and easily forgotten on potentially relevant APIs.
The new ThreadContext is associated with a ThreadPool (node or transport client) and ensures that
headers and context registered on the current thread are inherited by newly spawned threads, sent across
the network to be deserialized on the receiver end, and restored on the response handling thread
once the response is received.
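A hedged sketch of the inheritance/restoration pattern, using a
simplified stand-in rather than the actual ThreadContext API:

    import java.util.HashMap;
    import java.util.Map;

    class ThreadContextSketch {
        private final ThreadLocal<Map<String, String>> headers =
            ThreadLocal.withInitial(HashMap::new);

        void putHeader(String key, String value) {
            headers.get().put(key, value);
        }

        String getHeader(String key) {
            return headers.get().get(key);
        }

        // Capture the caller's headers so they can be restored on another
        // thread, e.g. the response handling thread.
        Runnable preserveContext(Runnable command) {
            Map<String, String> captured = new HashMap<>(headers.get());
            return () -> {
                Map<String, String> previous = headers.get();
                headers.set(captured);     // restore the caller's context
                try {
                    command.run();
                } finally {
                    headers.set(previous); // put the worker's own context back
                }
            };
        }
    }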