A voting-only master-eligible node is a node that can participate in master elections but will not act
as a master in the cluster. In particular, a voting-only node can help elect another master-eligible
node as master, and can serve as a tiebreaker in elections. High availability (HA) clusters require at
least three master-eligible nodes, so that if one of the three nodes is down, then the remaining two
can still elect a master amongst themselves. This requires only one of the two remaining nodes to
have the capability to act as master, but both need to have voting power. This means that one of
the three master-eligible nodes can be made voting-only. If this voting-only node is a dedicated
master, a less powerful machine or a smaller heap size can be chosen for this node. Alternatively, a
voting-only non-dedicated master node can play the role of the third master-eligible node, which
allows running an HA cluster with only two dedicated master nodes.
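A minimal configuration sketch, assuming the `node.voting_only` setting introduced by this change alongside the standard node role flags:

````java
import org.elasticsearch.common.settings.Settings;

// Sketch: the third master-eligible node as a dedicated, voting-only master.
Settings votingOnlyMaster = Settings.builder()
    .put("node.master", true)       // master-eligible: participates in elections
    .put("node.voting_only", true)  // votes, but is never elected master itself
    .put("node.data", false)        // dedicated master: a smaller machine/heap is fine
    .build();
````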
Closes #14340
Co-authored-by: David Turner <david.turner@elastic.co>
Today the `ClusterFormationFailureHelper` says `... discovery will continue
using ... from last-known cluster state` and lists all the nodes in the
last-known cluster state. In fact we ignore the master-ineligible nodes in the
last-known cluster state during discovery. This commit fixes this by listing
only the master-eligible nodes from the cluster state in this message.
If the master removes the relocating shard, but recovery isn't aware of
it, then we can enter an invalid state where ReplicationTracker does not
include the local shard.
Today we assert that a warning is logged after no more than
`discovery.cluster_formation_warning_timeout`, but the deterministic scheduler
adds a small amount of extra randomness to the timing of future events, causing
the following build to fail:
./gradlew :server:test --tests "org.elasticsearch.cluster.coordination.CoordinatorTests.testLogsWarningPeriodicallyIfClusterNotFormed" -Dtests.seed=DF35C28D4FA9EE2D
This commit adds an allowance for this extra time.
After two recent changes (#38824 and #33888), the _cat/indices API
no longer reports information for active recovering indices and
non-replicated closed indices. It also misreports replicated closed
indices that are potentially not authorized for the user.
This commit changes how the cat action works by first using the
Get Settings API in order to resolve authorized indices. It then uses
the Cluster State, Cluster Health and Indices Stats APIs to retrieve
information about the indices.
Closes #39933
This refactors AggregatorTestCase to allow testing mock scripts.
The main change is to QueryShardContext. This was previously mocked,
but to get the ScriptService you have to invoke a final method
which can't be mocked.
Instead, we just create a mostly-empty QueryShardContext and populate
the fields that are needed for testing. It also introduces a few
new helper methods that can be overridden to change the default
behavior a bit.
Most tests should be able to override getMockScriptService() to supply
a ScriptService to the context, which is later used by the aggs.
More complicated tests can override queryShardContextMock() as before.
Adds a test to MaxAggregatorTests to test out the new functionality.
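The override typically looks something like the following sketch, built from the test utilities named above; `SCRIPTS` stands in for a test-defined map from script names to Java functions:

````java
@Override
protected ScriptService getMockScriptService() {
    // SCRIPTS maps script names to functions standing in for real scripts.
    MockScriptEngine engine =
        new MockScriptEngine(MockScriptEngine.NAME, SCRIPTS, Collections.emptyMap());
    Map<String, ScriptEngine> engines = Collections.singletonMap(engine.getType(), engine);
    // The returned service is wired into the QueryShardContext and used by the aggs.
    return new ScriptService(Settings.EMPTY, engines, ScriptModule.CORE_CONTEXTS);
}
````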
At the end of a peer recovery the primary wants to mark the replica as in-sync. For that the
persisted local checkpoint of the replica needs to have caught up with the global checkpoint on the
primary. If translog durability is set to ASYNC, this means that information about the persisted local
checkpoint can lag on the primary and might need to be explicitly fetched through a global
checkpoint sync action. Unfortunately, that action will only be triggered after 30 seconds, and, even
worse, will only run based on what the in-sync shard copies say (see
IndexShard.maybeSyncGlobalCheckpoint). As the replica has not been marked as in-sync yet, it is
not taken into consideration, and the primary might have its global checkpoint equal to the max seq
no, so it thinks nothing needs to be done.
Closes #43486
Long and Double ValuesSource set the current document on the script
before executing, but Bytes was missing this method call. That meant
it was possible to generate an OutOfBoundsException when using
a "value" script (field + script) on keyword or other bytes
fields.
This adds in the method call, and a few yaml tests to verify correct
behavior.
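A sketch of the shape of the fix, with hypothetical field names (the real change lives in the Bytes script-values wrapper):

````java
@Override
public boolean advanceExact(int doc) throws IOException {
    if (bytesValues.advanceExact(doc)) {
        // The call that was missing for the Bytes variant: position the script
        // on the current document before evaluating it, as the Long and
        // Double variants already did.
        script.setDocument(doc);
        return true;
    }
    return false;
}
````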
This commit performs the proper restore of network disruption.
Previously disruptionScheme.stopDisrupting() was called, which does not
ensure that connectivity between cluster nodes is restored. The test
was checking that the cluster has green status, but it was not checking
that connectivity between nodes is restored.
Here we switch to internalCluster().clearDisruptionScheme(true) which
performs both checks before returning.
Similar to #42798
Closes #42051
(cherry picked from commit cd1ed662f847a0055ede7dfbd325e214ec4d1490)
This commit replaces usages of Streamable with Writeable for the
AcknowledgedResponse and its subclasses, plus associated actions.
Note that where possible response fields were made final and default
constructors were removed.
This is a large PR, but the change is mostly mechanical.
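The mechanical pattern is roughly the following sketch (illustrative class name): the no-arg constructor plus `readFrom` is replaced by a constructor taking a `StreamInput`, which lets response fields become final.

````java
public class SomeAcknowledgedResponse implements Writeable {
    private final boolean acknowledged;  // can be final now that it is set in the ctor

    public SomeAcknowledgedResponse(StreamInput in) throws IOException {
        acknowledged = in.readBoolean();  // replaces Streamable#readFrom
    }

    @Override
    public void writeTo(StreamOutput out) throws IOException {
        out.writeBoolean(acknowledged);
    }
}
````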
Relates to #34389
Backport of #43414
Unsupported HTTP methods are detected during request dispatching,
which generates an appropriate error response. Sadly, this error is
never sent back to the client because the method of the original
request is checked again in DefaultRestChannel, which again throws
an IllegalArgumentException that is never handled.
This pull request changes the DefaultRestChannel so that the latest
exception is swallowed, allowing the error message to be sent back
to the client. It also eagerly adds the objects to close to the toClose
list so that resources are more likely to be released if something
goes wrong during the response creation and sending.
We currently assert that adding deletion tombstones to Lucene must always succeed if it's not a
tragic exception, and the same should also hold true for NOOP tombstones. We rely on this
assumption, as without this, we have the risk of creating gaps in the history, which will break
operation-based recoveries and CCR.
Depending on git configuration, line endings on checked-out files may be
platform dependent, which causes problems for some msearch tests as the
line separator must always be `\n`. With this change we move two files
to the test code so that we control exactly which line separator is used,
given that the corresponding tests fail on Windows.
Closes #43464
Removes `@TestLogging` annotations in `*DisruptionIT` tests, so that the only
tests with annotations are those with open issues. Also adds links to the open
issues in the remaining cases.
Relates #43403
The current toXContent implementation can fail when the superclass's toXContent
is called (see #43423). This change makes sure that
DefaultShardOperationFailedException#toXContent is final and implementations
need to add special fields in #innerToXContent. All implementations should write
to self-contained xContent objects. Also adding a test for xContent deserialization
to CloseIndexResponseTests.
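The resulting shape is roughly this sketch (field names illustrative): the base class owns the enclosing object, and subclasses only contribute extra fields.

````java
@Override
public final XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
    builder.startObject();              // the base class owns the enclosing object
    builder.field("status", status);
    innerToXContent(builder, params);   // subclass hook for extra fields only
    builder.endObject();
    return builder;
}

protected XContentBuilder innerToXContent(XContentBuilder builder, Params params) throws IOException {
    return builder;                     // overridden by subclasses to add fields
}
````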
Closes #43423
Today when searching for an exclusive range the Java date math parser rounds up the value
with the granularity of the operation. So when searching for values that are greater than
"now-2M" the parser rounds the operation up to "now-1M". This behavior was introduced when
we migrated to Java date, but it looks like a bug, since the Joda math parser rounds up values
only when an explicit rounding is used. So "now/M" is rounded up (minus 1ms to get the largest inclusive value)
in the Joda parser if the result should be exclusive, but no rounding is applied if the input
is a simple operation like "now-1M". This change restores the Joda behavior in order to have
consistent parsing in all versions.
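A worked illustration of the restored behavior using plain `java.time` (this is not the parser code, just the arithmetic it should perform for an exclusive upper bound):

````java
import java.time.ZonedDateTime;
import java.time.temporal.ChronoUnit;

ZonedDateTime now = ZonedDateTime.parse("2019-06-15T10:00:00Z");

// gt: "now-2M" — a plain operation, so no rounding is applied:
ZonedDateTime plain = now.minusMonths(2);            // 2019-04-15T10:00:00Z

// gt: "now-2M/M" — explicit rounding, so the bound is rounded up to the end
// of the month, minus 1ms to get the largest inclusive value in that month:
ZonedDateTime rounded = now.minusMonths(2)
    .withDayOfMonth(1).truncatedTo(ChronoUnit.DAYS)  // start of 2019-04
    .plusMonths(1).minus(1, ChronoUnit.MILLIS);      // 2019-04-30T23:59:59.999Z
````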
Closes #43277
Currently the fromXContent logic for reindex requests is implemented in
the rest action. This is inconsistent with other requests where the
logic is implemented in the request. Additionally, it requires access to
the rest action in order to parse the request. This commit moves the
logic and tests into the ReindexRequest.
AggregatorTestCase will NPE if only a single, null MappedFieldType
is provided (which is required to simulate an unmapped field). While
it's possible to test unmapped fields by supplying other, non-related
field types... that's clunky and unnecessary. AggregatorTestCase
just needs to filter out null field types when setting up.
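The fix is essentially a null filter during setup, roughly (illustrative sketch, not the exact AggregatorTestCase code):

````java
// Drop null entries so a single null MappedFieldType can simulate an
// unmapped field without tripping an NPE during context setup.
MappedFieldType[] mapped = Arrays.stream(fieldTypes)
    .filter(Objects::nonNull)
    .toArray(MappedFieldType[]::new);
````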
Today the `DisruptibleMockTransport` always allows a connection to a node to be
established, and then fails requests sent to that node such as the subsequent
handshake. Since #42342, we log handshake failures on an open connection as a
warning, and this makes the test logs rather noisy. This change fails the
connection attempt first, avoiding these unrealistic warnings.
The test needed adaptation after #43205, as the ReplicationTracker now distinguishes between the
knowledge of the persisted global checkpoint and the computed global checkpoint on the primary.
Follow-up to #43205
Local and global checkpoints currently do not correctly reflect what's persisted to disk. The issue is
that the local checkpoint is adapted as soon as an operation is processed (but not fsynced yet). This
leaves room for the history below the global checkpoint to still change in case of a crash. As we rely
on global checkpoints for CCR as well as operation-based recoveries, this has the risk of shard
copies / follower clusters going out of sync.
This commit required changing some core classes in the system:
- The LocalCheckpointTracker now keeps track not only of whether an operation has
been processed, but also of whether that operation has been persisted to disk (see the sketch after this list).
- TranslogWriter now keeps track of the sequence numbers that have not been fsynced yet. Once
they are fsynced, TranslogWriter notifies LocalCheckpointTracker of this.
- ReplicationTracker now keeps track of the persisted local and persisted global checkpoints of all
shard copies when in primary mode. The computed global checkpoint (which represents the
minimum of all persisted local checkpoints of all in-sync shard copies), which was previously stored
in the checkpoint entry for the local shard copy, has been moved to an extra field.
- The periodic global checkpoint sync now also takes async durability into account, where the local
checkpoints on shards only advance when the translog is asynchronously fsynced. This means that
the previous condition to detect inactivity (max sequence number is equal to global checkpoint) is
not sufficient anymore.
- The new index closing API does not work when combined with async durability. The shard
verification step now requires an additional pre-flight step to fsync the translog, so that the main
verify-shard step has the most up-to-date global checkpoint at its disposal.
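A minimal, self-contained sketch of the processed/persisted distinction (illustrative names, not the actual LocalCheckpointTracker):

````java
import java.util.TreeSet;

final class SketchCheckpointTracker {
    private long processedCheckpoint = -1;  // all seqNos <= this are processed
    private long persistedCheckpoint = -1;  // all seqNos <= this are fsynced
    private final TreeSet<Long> processed = new TreeSet<>();
    private final TreeSet<Long> persisted = new TreeSet<>();

    // Called as soon as an operation is processed (possibly before any fsync).
    synchronized void markProcessed(long seqNo) {
        processed.add(seqNo);
        while (processed.remove(processedCheckpoint + 1)) {
            processedCheckpoint++;
        }
    }

    // Called by the translog once the operation is fsynced; under ASYNC
    // durability this can lag markProcessed by a whole sync interval.
    synchronized void markPersisted(long seqNo) {
        persisted.add(seqNo);
        while (persisted.remove(persistedCheckpoint + 1)) {
            persistedCheckpoint++;
        }
    }

    synchronized long getPersistedCheckpoint() {
        return persistedCheckpoint;  // what global checkpoints must be based on
    }
}
````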
The RemoteClusterService should close the current
RemoteClusterConnection and should build it again if
the seeds are changed, similarly to what is done when
the ping interval or the compression settings are changed.
Closes #37799
The fact that SearchPhaseContext extends ActionListener makes it hard
to reason about when the original listener is notified and to trace
those calls. Also, the corresponding onFailure and onResponse were
only needed in two places, one each, where they can be replaced by a
more intuitive call, like sendSearchResponse for onResponse.
Today the fielddata for global ordinals re-creates docvalues readers of each segment
when building the iterator of a single segment. This is required because the lookup of
global ordinals needs to access the docvalues' TermsEnum of each segment to retrieve
the original terms. This also means that we need to create NxN docvalues iterators (where N is the number of segments in the index)
each time we want to collect global ordinal values. This wasn't an issue in previous versions, since docvalues readers were stateless
before 6.0 and thus reused on each segment, but now that docvalues are iterators we need to create a new instance each time
we want to access the values. In order to avoid creating too many iterators, this change splits
the global ordinals fielddata into two classes: one that is used to cache a single instance per directory reader, and one,
created from the cached instance, that can be used by a single consumer. The latter creates the TermsEnum of each segment
once and reuses it to create the segment's iterator. This prevents the creation of all TermsEnums each time we want to access
the value of a single segment, reducing the number of docvalues iterators to create to Nx2 (one iterator and one lookup per segment).
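In rough outline, the split looks like the following sketch (hypothetical names, not the actual Elasticsearch classes):

````java
import java.io.IOException;
import org.apache.lucene.index.TermsEnum;

// Cached once per DirectoryReader.
abstract class GlobalOrdsPerReader {
    abstract int segmentCount();
    abstract TermsEnum buildTermsEnum(int segment) throws IOException;

    // Created from the cached instance, used by a single consumer.
    final class ConsumerView {
        private final TermsEnum[] lookups = new TermsEnum[segmentCount()];

        // Each segment's TermsEnum is built once and then reused, so a full
        // collection touches N enums (plus N iterators) instead of NxN.
        TermsEnum lookup(int segment) throws IOException {
            if (lookups[segment] == null) {
                lookups[segment] = buildTermsEnum(segment);
            }
            return lookups[segment];
        }
    }
}
````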
This commit removes some very old test logging annotations that appeared
to be added to investigate test failures that are long since closed. If
these are needed, they can be added back on a case-by-case basis with a
comment associating them to a test failure.
It's possible for the passed-in `IndexMetaData` to be null (for
instance, the cluster state passed in does not have the index in its
metadata), which in turn can cause a `NullPointerException` when
evaluating the conditions for an index. This commit adds null protection
and unit tests for this case.
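The guard itself is roughly a one-liner (illustrative sketch):

````java
// If the cluster state does not contain the index, skip condition evaluation
// instead of dereferencing a null IndexMetaData.
if (indexMetaData == null) {
    return; // nothing to evaluate for an index absent from the metadata
}
````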
Resolves #43296
While investigating memory consumption of deeply nested aggregations for #43091,
the memory used to keep track of the doc ids and buckets in the BestBucketsDeferringCollector
showed up as one of the main contributors. In my tests, half of the memory held in the
BestBucketsDeferringCollector is associated with segments that don't have matching docs
in the selected buckets. This is expected on fields that have a big cardinality, since each
bucket can appear in very few segments. By allocating the builders lazily, this change
reduces the memory consumption by a factor of 2 (from 1GB to 512MB), hence reducing the
impact on GCs for these volatile allocations. This commit also replaces the PackedLongValues.Builder
with a RoaringDocIdSet in order to handle very sparse buckets more efficiently.
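The lazy allocation is roughly the following (hypothetical names, not the actual BestBucketsDeferringCollector internals):

````java
import org.apache.lucene.util.RoaringDocIdSet;

final class PerSegmentDocs {
    private final int maxDoc;
    private RoaringDocIdSet.Builder docs;  // null until this segment matches

    PerSegmentDocs(int maxDoc) {
        this.maxDoc = maxDoc;
    }

    void collect(int doc) {
        if (docs == null) {
            // Allocated lazily: segments with no matching docs in the selected
            // buckets never pay for a builder. RoaringDocIdSet also stores very
            // sparse doc id sets more compactly than PackedLongValues.
            docs = new RoaringDocIdSet.Builder(maxDoc);
        }
        docs.add(doc);  // doc ids arrive in increasing order per segment
    }
}
````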
I ran all my tests on the `geoname` rally track with the following query:
````
{
  "size": 0,
  "aggs": {
    "country_population": {
      "terms": {
        "size": 100,
        "field": "country_code.raw"
      },
      "aggs": {
        "admin1_code": {
          "terms": {
            "size": 100,
            "field": "admin1_code.raw"
          },
          "aggs": {
            "admin2_code": {
              "terms": {
                "size": 100,
                "field": "admin2_code.raw"
              },
              "aggs": {
                "sum_population": {
                  "sum": {
                    "field": "population"
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
````
This PR is a backport of #43214 from v8.0.0
A number of the aggregation base classes have an abstract doEquals() and doHashCode() (e.g. InternalAggregation.java, AbstractPipelineAggregationBuilder.java).
Theoretically this is so the sub-classes can add to the equals/hashCode and don't need to worry about calling super.equals(). In practice, it's mostly just confusing/inconsistent. And if there are more than two levels, we end up with situations like InternalMappedSignificantTerms, which has to call super.doEquals(), defeating the point of having these overridable methods.
This PR removes the `do` versions and just uses equals/hashCode, calling the super methods where necessary.
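The resulting convention looks roughly like this (illustrative class names):

````java
import java.util.Objects;

class BaseAggregationBuilder {
    @Override public boolean equals(Object obj) { return obj != null && getClass() == obj.getClass(); }
    @Override public int hashCode() { return getClass().hashCode(); }
}

class SomeAggregationBuilder extends BaseAggregationBuilder {
    private final String field;

    SomeAggregationBuilder(String field) {
        this.field = field;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) return true;
        if (obj == null || getClass() != obj.getClass()) return false;
        if (super.equals(obj) == false) return false;  // explicit, instead of doEquals()
        SomeAggregationBuilder other = (SomeAggregationBuilder) obj;
        return Objects.equals(field, other.field);
    }

    @Override
    public int hashCode() {
        return Objects.hash(super.hashCode(), field);  // instead of doHashCode()
    }
}
````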
* Follow up to #42109:
* Adjust test to only check that interface lookup by name works, not to actually look up IPs, which is brittle since virtual interfaces can be destroyed/created by Docker while the tests are running
Co-authored-by: Jason Tedor <jason@tedor.me>
* Return 0 for negative "free" and "total" memory reported by the OS
We've had a situation where the MX bean reported negative values for the
free memory of the OS. In those rare cases we want to return a value of
0 rather than blow up later down the pipeline (see the sketch after this list).
In the event that there is a serialization or creation error with regard
to memory use, this adds asserts so the failure will occur as soon as
possible and give us a better location for investigation.
Resolves #42157
* Fix test passing in invalid memory value
* Fix another test passing in invalid memory value
* Also change mem check in MachineLearning.machineMemoryFromStats
* Add background documentation for why we prevent negative return values
* Clarify comment a bit more
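A minimal sketch of the clamping (not the actual OsProbe code; the MX bean calls come from `com.sun.management`):

````java
import java.lang.management.ManagementFactory;

public class OsMemorySketch {
    public static void main(String[] args) {
        com.sun.management.OperatingSystemMXBean os =
            (com.sun.management.OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
        // The MX bean has occasionally reported negative values; clamp to 0
        // instead of letting a nonsensical number flow down the pipeline.
        long free = Math.max(0L, os.getFreePhysicalMemorySize());
        long total = Math.max(0L, os.getTotalPhysicalMemorySize());
        System.out.println("free=" + free + " total=" + total);
    }
}
````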
This PR reverts #35230.
Previously, we relied on soft-deletes to fill the mismatch between the
version map and the Lucene index. This is no longer needed after #43202
where we rebuild the version map when opening an engine. Moreover,
PrunePostingsMergePolicy can prune _id of soft-deleted documents out of
order; thus the lookup result including soft-deletes sometimes does not
return the latest version (although it's okay as we only use a valid
result in an engine).
With this change, we use only live documents in Lucene to resolve the
indexing strategy. This is perfectly safe since we keep all deleted
documents after the local checkpoint in the version map.
Closes #42979