This is related to #29023. Additionally, at other points we have
discussed a preference for removing the need to unnecessarily block
threads when opening new node connections. This commit lays the groundwork
for this by opening connections asynchronously at the transport level.
We still block for now; however, this work will make it possible to
eventually remove all blocking on new connections from the
TransportService and Transport.
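As a minimal sketch of the listener-based style this lays the groundwork for
(the names and signatures here are illustrative, not the exact API):

    // open the connection asynchronously; the calling thread is never blocked
    transport.openConnection(node, connectionProfile, ActionListener.wrap(
        connection -> {
            // hand the established connection to the TransportService
        },
        e -> logger.warn("failed to open connection to " + node, e)
    ));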
We've decided that the bulk, delete, get, index, update, and search APIs should not
contain this request parameter, and we will instead accept both typed and typeless calls.
Today we allow the user to set the minimum size of a voting configuration. On
reflection we would rather this was simply '3' where possible, and we can use
the retirement API to control the removal of nodes more explicitly.
This change replaces the old reconfigurator setting with a new one,
`cluster.auto_shrink_voting_configuration`, which determines whether
Elasticsearch should automatically remove nodes from the voting configuration
or not.
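Since the new setting is a dynamic cluster setting, disabling automatic
shrinking might look like this (an illustrative request, not taken from the
change itself):

PUT /_cluster/settings
{
  "persistent": {
    "cluster.auto_shrink_voting_configuration": false
  }
}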
We have seen an improvement when we bumped the timeout from 1s to 5s, but there are still a few failures for this test. With this commit we bump the timeout to 10 seconds, hoping it will stop all the failures.
Today if a wildcard, date-math expression or alias expands/resolves
to an index that is search-throttled we still search it. This is likely
not the desired behavior since it can unexpectedly slow down searches
significantly.
This change adds a new indices option that allows `search`, `count`
and `msearch` to ignore throttled indices by default. Users can
force expansion to throttled indices by setting `ignore_throttled=false`
on the rest request.
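For example, a search that explicitly opts back in to throttled indices might
look like this (the index pattern is illustrative):

GET /logs-*/_search?ignore_throttled=false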
Relates to #34352
This test has a bug that got introduced during the refactoring of #32442. With 2 concurrent term increments,
we can only assert under the operation permit that we are in the correct operation term, not that there is
not already another term bump pending.
Closes #34862
If the random query string is "now" by accident _and_ we are also not setting
some field names to use explicitly, then we can hit the "mapped_date" field
from the default test setup. This correctly leads to the query being marked as
not cacheable, but we assume otherwise and check for cacheability later. This
change fixes this rare edge case by making sure we don't hit the "mapped_date"
field in these rare cases.
Closes #35183
When the engine is asked for historical operations, we check if some of the requested operations
are not yet refreshed and if so we refresh before returning the operations. The refresh check is
based on capturing the local checkpoint before each refresh and comparing that value to the one
requested when `newChangesSnapshot` was called. If the requested range is above the captured
local checkpoint we issue a refresh.
This can currently cause unneeded extra refreshes if the method is called
concurrently, which may cause unwanted degradation in indexing performance.
This is especially relevant for CCR, where we always ask for a range below
the global checkpoint. That range is guaranteed to be below the local
checkpoint of the shard, and one refresh is enough to serve multiple changes
requests.
This commit fixes this by introducing a dedicated mutex to make sure that the
check for whether a refresh is needed actually waits for concurrent refreshes
that were triggered by another changes request.
Note that this is not a big change in semantics, as refreshes are serialized by
Lucene anyway. I also opted to keep the synchronization limited to the changes
snapshot requests only, even if in theory we could apply it to all refreshes, no
matter where they come from.
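A minimal sketch of the double-checked guard described above (field and
method names are illustrative, not the verbatim engine code):

    private final Object refreshIfNeededMutex = new Object();

    private void refreshIfNeeded(String source, long requestingSeqNo) {
        if (lastRefreshedCheckpoint() < requestingSeqNo) {
            synchronized (refreshIfNeededMutex) {
                // re-check under the mutex: a concurrent caller may already
                // have refreshed past the requested sequence number
                if (lastRefreshedCheckpoint() < requestingSeqNo) {
                    refresh(source);
                }
            }
        }
    }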
This changes the current script.max_size_in_bytes to be dynamic so it can be
set through the cluster settings API. This setting is also applied to inline scripts
in the compile method of ScriptService to prevent excessively long inline
scripts from being compiled. The script length limit is removed from Painless as
this is no longer necessary with the protection in compile.
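For illustration, the limit could then be raised at runtime like this (the
value is an example):

PUT /_cluster/settings
{
  "transient": {
    "script.max_size_in_bytes": 131072
  }
}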
Currently we create a new netty event loop group for client connections
and all server profiles. Each new group creates new threads for io
processing. This means 2 * (number of processors) new threads for each
group. A single group should be able to handle all io processing (for
the transports). This also brings the netty module in line with what we
do for nio.
Additionally, this PR renames the worker threads to be the same for
netty and nio.
Currently, we assume that rollback always happens in the test
testRestoreLocalHistoryFromTranslogOnPromotion. However, if the global
checkpoint equals max_seq_no, we won't roll back. This causes the
max_seq_no_of_updates assertion to fail because max_seq_no_of_updates
won't be advanced to the global checkpoint. With this commit, we assert
max_seq_no_of_updates in two different paths.
Today we always allocate a full buffer (1024 elements) in a
LuceneChangesSnapshot even though the requested size is smaller.
With this change, we will use the requested size as the buffer size if
it's smaller than the default batch size; otherwise we use the default
batch size.
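The sizing logic amounts to something like the following one-liner (names are
illustrative):

    // never allocate more buffer slots than the caller asked for
    final int bufferSize = Math.min(requestedSize, DEFAULT_BATCH_SIZE);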
With this commit we differentiate between permanent circuit breaking
exceptions (which require intervention from an operator and should not
be automatically retried) and transient ones (which may heal themselves
eventually and should be retried). Furthermore, the parent circuit
breaker will categorize a circuit breaking exception as either transient
or permanent based on the categorization of memory usage of its child
circuit breakers.
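A rough sketch of the parent breaker's categorization, assuming a Durability
enum on the child breakers (all names here are illustrative):

    // the breach is permanent only if an over-limit child breaker is
    // itself categorized as permanent; otherwise it is transient
    Durability durability = Durability.TRANSIENT;
    for (CircuitBreaker child : childBreakers) {
        if (child.getUsed() >= child.getLimit()
                && child.getDurability() == Durability.PERMANENT) {
            durability = Durability.PERMANENT;
        }
    }
    throw new CircuitBreakingException("[parent] data too large", durability);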
Closes #31986
Relates #34460
Stop passing `Settings` to `AbstractComponent`'s ctor. This allows us to
stop passing around `Settings` in a *ton* of places. While this change
touches many files, it touches them all in fairly small, mechanical
ways, doing a few things per file:
1. Drop the `super(settings);` line on everything that extends
`AbstractComponent`.
2. Drop the `settings` argument to the ctor if it is no longer used.
3. If the file doesn't use `logger` then drop `extends
AbstractComponent` from it.
4. Clean up all compilation failures caused by the `settings` removal
and drop any now unused `settings` instances and method arguments.
I've intentionally *not* removed the `settings` argument from a few
files:
1. TransportAction
2. AbstractLifecycleComponent
3. BaseRestHandler
These files don't *need* `settings` either, but this change is large
enough as is.
Relates to #34488
* NETWORKING: MockTransportService Wait for Close
* Make `MockTransportService` wait `30s` for close listeners to run before failing the assertion
* Closes #34990
When we connect to remote clusters, there may be a few more routers/firewalls in-between compared to when we connect to nodes in the same cluster. We've experienced cases where firewalls drop connections completely and keep-alives seem not to be enough, or they are not properly configured. With this commit we allow enabling application-level pings specifically from CCS nodes to the selected remote nodes through the new setting `cluster.remote.${clusterAlias}.transport.ping_schedule`. The new setting is similar to `transport.ping_schedule` but it does not affect intra-cluster communication; pings are only sent to a specific remote cluster when specifically enabled, as they are disabled by default.
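For example, enabling pings to a remote cluster registered under the alias
`cluster_one` might look like this (alias and interval are illustrative):

PUT /_cluster/settings
{
  "persistent": {
    "cluster.remote.cluster_one.transport.ping_schedule": "30s"
  }
}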
Relates to #34405
* DISCOVERY: Cleanup AbstractDisruptionTestCase
* Make the internal test cluster manage minimum master nodes where we used the default of (nodes / 2 + 1) before
* Remove use of the `NodeConfigurationSource` indirection
* Relates #33675
Drops the `Settings` member from `AbstractComponent`, moving it from the
base class on to the classes that use it. For the most part this is a
mechanical change that doesn't drop `Settings` accesses. The one
exception to this is naming threads where it switches from an invocation
that passes `Settings` and extracts the node name to one that explicitly
passes the node name.
This change doesn't drop the `Settings` argument from
`AbstractComponent`'s ctor because this change is big enough as is.
We'll do that in a follow up change.
This commit filters out usage of deprecated timezones by tests. These are
tested separately and should not require checking for warnings on any
test using random timezones.
closes #34188
This commit adds a new single value metric aggregation that calculates
the statistic called median absolute deviation, which is a measure of
variability that works on more types of data than standard deviation.
Our calculation of MAD is approximated using t-digests. In the collect
phase, we collect each value visited into a t-digest. In the reduce
phase, we merge all value t-digests, then create a t-digest of
deviations using the first t-digest's median and centroids.
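A compact sketch of the reduce-phase approximation, assuming the t-digest
library (`com.tdunning.math.stats`); this is an illustration, not the actual
aggregator code:

    import com.tdunning.math.stats.Centroid;
    import com.tdunning.math.stats.TDigest;

    static double approximateMedianAbsoluteDeviation(TDigest values) {
        double median = values.quantile(0.5);
        // build a digest of |x - median| from the value digest's centroids
        TDigest deviations = TDigest.createDigest(values.compression());
        for (Centroid c : values.centroids()) {
            deviations.add(Math.abs(c.mean() - median), c.count());
        }
        return deviations.quantile(0.5);
    }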
Conflicts during the merge:
1. >=140 chars line length fixed for a lot of project files and warnings
for those files are no longer suppressed
2. Node name is removed from AbstractComponent, it’s no longer taken
from settings, but is explicitly passed as constructor argument and
there were quite a few new classes on zen2 branch that require this
change
3. TransportResponseHandler interface changed (new method added) and
Zen2 makes a lot of subclasses in tests
4. Deprecated way of obtaining logger was changed
Bulk Request in High level rest client should be consistent with what is
possible in Rest API, therefore it should support global parameters. Global
parameters are passed in the URL in the Rest API.
Some parameters are mandatory - index, type - and will fail validation
if not provided before the bulk is executed.
Optional parameters - routing, pipeline.
The usage of these should be consistent across sync/async execution,
bulk processor and BulkRequestBuilder.
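A sketch of the intended usage, following the description above (method names
are illustrative):

    // global index and type are applied to every sub-request in the bulk
    BulkRequest request = new BulkRequest("my-index", "_doc");
    // optional global parameters
    request.routing("user-1");
    request.pipeline("my-pipeline");
    request.add(new IndexRequest().id("1").source("field", "value"));
    client.bulk(request, RequestOptions.DEFAULT);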
closes #26026
Deprecates `_source_include` and `_source_exclude` url parameters
in favor of `_source_includes` and `_source_excludes` because those
are consistent with the rest of Elasticsearch's APIs.
Relates to #22792
Drops the `deprecationLogger` from `AbstractComponent`, moving it to
places where we need it. This saves us from building a bunch of
`DeprecationLogger`s that we don't need.
Relates to #34488
`AbstractComponent` is trouble because its name implies that
*everything* should extend from it. It *is* useful, but maybe too
broadly useful. The things it offers access to, the `Settings` instance
for the entire server and a logger, are nice to have around, but not
really needed *everywhere*. The `Settings` instance especially adds a
fair bit of ceremony to testing without any value.
This removes the `nodeName` method from `AbstractComponent` so it is
more clear where we actually need the node name.
* Removed methods are just unused (the exceptions being isGeoPoint() and
isFloatingPoint(), but those could more efficiently be replaced by enum comparisons to simplify the code)
* Removed exceptions aren't thrown
In order to remove Streamable from the codebase, Response objects need
to be read using the Writeable.Reader interface which this change
enables. This change enables the use of Writeable.Reader by adding the
`Action#getResponseReader` method. The default implementation simply
uses the existing `newResponse` method and the readFrom method. As
responses are migrated to the Writeable.Reader interface, Action
classes can be updated to throw an UnsupportedOperationException when
`newResponse` is called and override the `getResponseReader` method.
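Based on the description above, the default implementation is essentially the
following sketch (hedged, not the verbatim source):

    public Writeable.Reader<Response> getResponseReader() {
        return in -> {
            // fall back to the legacy Streamable-style deserialization
            Response response = newResponse();
            response.readFrom(in);
            return response;
        };
    }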
Relates #34389
This commit adds a new ParentJoinAggregator that implements a join using global ordinals
in a way that can be reused by the `children` and the upcoming `parent` aggregation.
This new aggregator is a refactor of the existing ParentToChildrenAggregator with two main changes:
* It uses a dense bit array instead of a long array when the aggregation does not have any parent.
* It uses a single aggregator per bucket if it is nested under another aggregation.
For the latter case we use a `MultiBucketAggregatorWrapper` in the factory in order to ensure that each
instance of the aggregator handles a single bucket. This is more in line with the strategy we use for other
aggregations like the `terms` aggregation, since the number of buckets to handle should be low (thanks to the breadth_first strategy).
This change is also required for #34210 which adds the `parent` aggregation in the parent-join module.
Relates #34508
* Fix linelength suppressions in index.fielddata
* Some lines that were too long were dead code => Removed them and all code that became dead because of it
* Relates #34884
testClusterFormsByScanningPorts is flaky: sometimes in CI it's not possible to
bind to any of the ports we need to in order for the port scanning to work.
This change removes this test, and #34809 describes a better way to test this
behaviour.
After discussing on the team's FixItFriday, we concluded that
static final instance variables that are mutable should be lowercased.
Historically, DeprecationLogger was uppercased more frequently than lowercased.
This is related to #30876. The AbstractSimpleTransportTestCase initiates
many tcp connections. There are normally over 1,000 connections in
TIME_WAIT at the end of the test. This is because every test opens at
least two different transports that connect to each other with a
13-channel connection profile. This commit modifies the default
connection profile used by this test to 6 channels: one connection for
each type, except for REG which gets 2 connections.
* Check self references in metric agg after last doc collection (#33593)
* Revert 0aff5a30c5dbad9f476be14f34b81e2d1991bb0f (#33593)
* Check self refs in metric agg only once in post collection hook (#33593)
* Remove unnecessary mocking (#33593)
The check for null argument is already done in `splitStringByCommaToArray`, hence it can be removed, which allows us to remove the whole splitScrollIds private method.
With this change, we apply the common test config automatically to all
newly created tasks instead of opting in specifically.
For plugin authors using the plugin externally this means that the
configuration will be applied to their RandomizedTestingTasks as well.
The purpose of the task is to simplify setup and make it easier to
change projects that use the `test` task but actually run integration
tests to use a task called `integTest` for clarity, but also because
we may want to configure and run them differently,
e.g. using different levels of concurrency.
Currently, MetaDataStateFormat.write throws an IOException if there was some problem with persisting state to disk. If an exception is thrown, loadLatestState may read either old state or new state. This is not enough for the Zen2 algorithm. In case of failure, we need to distinguish between 2 cases: storage is left in a clean state or storage is left in a dirty state.
If storage is left in a clean state, loadLatestState may read only old state. If storage is left in a dirty state, loadLatestState may read either old or new state.
If an exception occurs when writing the manifest file to disk, this distinction is important for Zen2. If storage is clean, the node can continue to be a part of the cluster and may try to accept further cluster state updates (if it fails to accept cluster state updates it will be kicked out of the cluster using a different mechanism). But if storage is dirty, the node should be restarted and it will be able to start up successfully only once it successfully re-writes the manifest file to disk.
This commit changes the MetaDataStateFormat.write signature, replacing IOException with WriteStateException, whose `isDirty` method can be used to distinguish between the 2 failure cases.
We need to minimise the number of failures that leave storage in a dirty state. That's why this PR changes the algorithm that is used to store state to disk. It has the following layout:
1. For the first state location, create and fsync a tmp file with the state content.
2. For each extra location, copy and fsync the tmp file with the state content.
3. Atomically rename the tmp file in the first location.
4. For each extra location, atomically rename the tmp file.
5. For each location, fsync the state directory.
6. Perform cleanup of old files, ignoring exceptions.
If an exception occurs in steps 1-4, storage is clearly in a clean state. If an exception occurs in step 6, storage is clearly in a dirty state. An exception in step 5 is questionable; there are 2 options:
1. Consider it a failure. If the first disk fails, state disappears. So this is a failure and storage is in a dirty state.
2. Do not consider it a failure at all, and ignore disk failures.
This commit prefers the 1st approach, and MetaDataStateFormatTests.testFailRandomlyAndReadAnyState tests behaviour under disk failures.
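A caller-side sketch of how the `isDirty` method distinguishes the two
failure modes (illustrative, not the verbatim API):

    try {
        format.write(state, locations);
    } catch (WriteStateException e) {
        if (e.isDirty()) {
            // new state may already be (partially) visible on disk;
            // the node must restart and successfully re-write the manifest
        } else {
            // storage still holds only the old state; safe to continue
        }
    }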
Access to the special variables _source and _fields was accidentally
removed in recent refactorings. This commit adds them back, along with a
test.
closes #33884
In a future major version, we will be introducing a soft limit on the
number of shards in a cluster based on the number of nodes in the
cluster. This limit will be configurable, and will be checked on
operations which create or open shards; such operations will issue a
warning if they would take the cluster over the limit.
There is an option to enable strict enforcement of the limit, which
turns the warnings into errors. In a future release, the option will be
removed and strict enforcement will be the default (and only) behavior.
- Restrict visibility of Aggregators and Factories
- Move PipelineAggregatorBuilders up a level so it is consistent with
AggregatorBuilders
- Checkstyle line length fixes for a few classes
- Minor odds/ends (swapping to method references, formatting, etc)
This commit introduces two corrections to the way simulate?verbose
handles conditionals on processors.
1) Prior to this change when executing simulate?verbose for
processors with conditionals that evaluate to false, that processor
would still be displayed in the result set. What was displayed was
correct, such that no changes to the document occurred. However, if the
conditional evaluates to false, the processor should not even be
displayed.
2) Prior to this change when executing simulate?verbose for
pipeline processors with conditionals, the individual steps would no
longer be displayed. Commit e37e5df addressed the issue, but
failed to account for a conditional on the pipeline processor. Since
a pipeline processor can introduce cycles and is effectively a
single processor that encapsulates multiple other processors that
are potentially guarded by a single conditional, special handling is
needed for pipeline and conditional pipeline processors.
We should delete a job by directly talking to the allocated
task and telling it to shutdown. Today we shut down a job
via the persistent task framework. This is not ideal because,
while the job has been removed from the persistent tasks in the
cluster state, the allocated task continues to live until it gets the
shutdown message.
This means a user can delete a job, immediately delete
the rollup index, and then see new documents appear in
the just-deleted index. This happens because the indexer
in the allocated task is still running and indexes a few
more documents before getting the shutdown command.
In this PR, the transport action is changed to a TransportTasksAction,
and we invoke onCancelled() directly on the matching job.
The race condition still exists after this PR (albeit less likely),
but this was a precursor to fixing the issue and a self-contained
chunk of code. A second PR will follow up to fix the race itself.
This fixes a bug in aliases authorization.
That is, a user might see aliases which they are not authorized to see.
This manifests when the user is not authorized to see any aliases
and the `GetAlias` request is empty, which normally is a marker
that all aliases are requested. In this case, no aliases should be
returned, but due to this bug, all aliases were returned.
The `ignore` set contains entries of type Class<?>, but the check is performed
on Path objects. This always returns false, so the check is currently useless.
Looking at the first commit of this test, which already shows this behaviour,
this check never excluded anything, so it can be removed.
This change introduces stats per processor. Total, time, failed,
current are currently supported. All pipelines will now show all
top level processors that belong to them. Failure processors are not
displayed; however, the time taken to execute the failure chain is part
of the stats for the top level processor.
The processor name is the type of the processor, ordered as defined in
the pipeline. If a tag for the processor is found, then the tag is
appended to the type.
Pipeline processors will have the pipeline name appended to the name of
the processor (before the tag if one exists). If more
than one pipeline is used to process the document, then each pipeline
will carry its own stats. The outermost pipeline will also include the
innermost pipeline stats.
Conditional processors will only be included in the stats if the condition evaluates
to true.
Adds checks for parsed geo distance query. It is a bit hack-ish since it
compares with query's toString() output, but it is better than no
checks. The parsed query itself has default visibility, so we cannot
access it here unless we move the test to org.apache.lucene.document
package.
Fixes #34043
This switches from joda time to java time when resolving index names
using date math. This commit also removes two non registered settings
from the code, which could not be used anyway. An unused method was
removed as well.
Relates #27330
* Replace custom type names with _doc in REST examples.
* Avoid using two mapping types in the percolator docs.
* Rename doc -> _doc in the main repository README.
* Also replace some custom type names in the HLRC docs.
* Make accounting circuit breaker settings dynamic
These settings were missing the property that makes them dynamic. This
fixes the issue so they can now be set at any time.
Resolves #34368
Integrates the failure detectors with the Connection lifecycle, to fail nodes as soon as:
- a leader detects one of its followers disconnecting.
- a follower detects its leader disconnecting.
As master-eligible nodes join or leave the cluster we should give them votes or
take them away, in order to maintain the optimal level of fault-tolerance in
the system. #33924 introduced the `Reconfigurator` to calculate the optimal
configuration of the cluster, and in this change we add the plumbing needed to
actually perform the reconfigurations needed as the cluster grows or shrinks.
Since #34288, we might hit deadlock if the FollowTask has more fetchers
than writers. This can happen in the following scenario:
Suppose the leader has two operations [seq#0, seq#1]; the FollowTask has
two fetchers and one writer.
1. The FollowTask issues two concurrent fetch requests: {from_seq_no: 0,
num_ops:1} and {from_seq_no: 1, num_ops:1} to read seq#0 and seq#1
respectively.
2. The second request which fetches seq#1 completes first, and then it
triggers a write request containing only seq#1.
3. The primary of a follower fails after it has replicated seq#1 to
replicas.
4. Since the old primary did not respond, the FollowTask issues another
write request containing seq#1 (resend the previous write request).
5. The new primary has seq#1 already; thus it won't replicate seq#1 to
replicas but will wait for the global checkpoint to advance at least
seq#1.
The problem is that the FollowTask has only one writer and that writer
is waiting for seq#0, which won't be delivered until the writer completes.
This PR proposes to replicate existing operations with the old primary
term (instead of the current term) on the follower. In particular, when
the following primary detects that it has processed an operation already,
it will look up the term of the existing operation with the same seq_no
in the Lucene index, then rewrite that operation with the old term
before replicating it to the following replicas. This approach is
wait-free but requires soft-deletes on the follower.
Relates #34288
When an envelope that crosses the dateline is specified as a part of a
geo_shape query, it shouldn't have its left and right points flipped
during parsing.
Fixes #34418
The shard suggestion sort uses a different tie-break than the one that is used
to merge different shards responses. The former uses the internal document identifier
when scores are the same whereas the latter compares the surface form first.
Because of this discrepancy some suggestion outputs are linked to the wrong documents
because the merge sort reorders the shard suggestions differently. This change
fixes this bug by duplicating the Lucene collector in order to be able to apply the
same tiebreak strategy as the merge sort. This logic will be removed when
https://issues.apache.org/jira/browse/LUCENE-8529 is fixed.
Closes #34378
Today we rely on the LocalCheckpointTracker to ensure no duplicate when
enabling optimization using max_seq_no_of_updates. The problem is that
the LocalCheckpointTracker is not fully reloaded when opening an engine
with an out-of-order index commit. Suppose the starting commit has seq#0
and seq#2, then the current LocalCheckpointTracker would return "false"
when asked whether seq#2 was processed before, although seq#2 is in the commit.
This change scans the existing sequence numbers in the starting commit,
then marks these as completed in the LocalCheckpointTracker to ensure
the consistent state between LocalCheckpointTracker and Lucene commit.
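The reload described above amounts to something like this sketch
(`seqNosInStartingCommit` is an illustrative stand-in for the commit scan):

    // mark every sequence number present in the starting commit as completed,
    // so the tracker agrees with what the Lucene commit already contains
    for (long seqNo : seqNosInStartingCommit) {
        localCheckpointTracker.markSeqNoAsCompleted(seqNo);
    }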
The exclusion setting `cluster.routing.allocation.exclude._host` defaults to
an empty string.
When an exclusion setting is sent with a null value, the
o.e.c.s.Setting#innerGetRaw API returns an empty string (probably to
avoid raising a NullPointerException).
The o.e.c.r.a.d.FilterAllocationDecider class is written to omit
updates that set an exclusion setting to its default value.
That's why a null exclusion setting value is translated to an empty
string, which equals the exclusion default value, and is therefore
ignored.
A simple fix is to not omit default values for exclusion settings
and to keep the NullPointerException guard. This is the purpose of this
commit.
Closes #32721
The `term` and `phrase` suggesters have different options to filter candidates
based on their frequencies. The `popular` mode for instance filters out candidate
terms that occur in fewer docs than the original term. However, when we compute this threshold
we use the total term frequency of a term instead of the document frequency. This is not in line
with the actual filtering, which is always based on the document frequency. This change fixes
this discrepancy and clarifies the meaning of the different frequencies in use in the suggesters.
It also ensures that the threshold doesn't overflow the maximum allowed value (Integer.MAX_VALUE).
Closes #34282
With this commit we cleanup hand-coded duplicate checks in XContent
parsing. They were necessary previously but since we reconfigured the
underlying parser in #22073 and #22225, these checks are obsolete and
were also ineffective unless an undocumented system property has been
set. As we also remove this escape hatch, we can remove the additional
checks as well.
Closes #22253
Relates #34588
In order to stay BWC compatible with joda time, the epoch millis date
formatter needs to parse dates with a dot like `123.45`. This
adds this functionality for the epoch millis parser in the same way as
for the epoch seconds parser. It also adds support for scientific
notations like `1.0e3` and fixes parsing of negative values for epoch
seconds and epoch millis.
We wish to commit a cluster state update after having received a response from
more than half of the master-eligible nodes in the cluster. This is optimal:
requiring either more or fewer votes than half harms resilience. For instance,
if we have three master nodes then we want to be able to commit a cluster
state after receiving responses from any two nodes; requiring responses from
all three is clearly not resilient to the failure of any node, and if we could
commit an update after a response from just one node then that node would be
required for every commit, which is also not resilient.
However, this means we must adjust the configuration (the set of voting nodes
in the cluster) whenever a master-eligible node joins or leaves. The
calculation of the best configuration for the cluster is the job of the
Reconfigurator, introduced here.
When we upgrade an index, we set the settings version upgraded
setting. This should be considered a settings change, and therefore we
need to increment the settings version. This commit addresses that.
Applies our line length guidance to all classes in the server's `lucene`
directories *except* `XMoreLikeThis`. The only long line in
`XMoreLikeThis` says "remove this when we upgrade to Lucene 5". Given
that we're on Lucene 8, this is a little terrifying and deserves another
look.
In a remote cluster setup, if we see a configured proxy we should set
the seed node's host name as the `server_name` to trigger SNI-based
routing even for seed nodes. Since remote cluster connections are
plain TCP connections we have to set the host manually since the other
side can't take it from the request URL like in the HTTP case.
This also adds some more informative logging to the remote cluster connection.
Switch MetaDataStateFormat to Lucene directory abstraction
This commit switches the MetaDataStateFormat class to the Lucene directory abstraction to make it easier to test MetaDataStateFormat under different IO failures.
This commit also adds tests for different IO failures to MetaDataStateFormatTests.
In #28741 RolloverIT fails because we are cutting over to the
next day while the test executes. We assume that this doesn't happen
based on the assertions in the test. This adds an assumeTrue to ensure
we are at least 5 min away from a date-flip.
Closes #28741
`Engine.Searcher` is non-final today, which makes it error prone
in the case of wrapping the underlying reader or lucene `IndexSearcher`
like we do in `IndexSearcherWrapper`. Until now there was no subclass of it
whose loss would be dramatic if it were just dropped on the floor. With the
start of development of frozen indices this changed, since in #34357
functionality was added to a subclass which would be dropped if an
`IndexSearcherWrapper` is installed on an index.
This change locks down `Engine.Searcher` to prevent such a functionality trap.
The `AutoFollowTests` needs to restart the clusters between each test, because
it is using auto follow stats in assertions. Auto follow stats are only reset
by stopping the elected master node.
Extracted the `testGetOperationsBasedOnGlobalSequenceId()` test to its own test, because it just tests the shard changes api.
* Renamed AutoFollowTests to AutoFollowIT, because it is an integration test.
Renamed ShardChangesIT to IndexFollowingIT, because shard changes is the name
of an internal api and isn't a good name for an integration test.
* Moved creation of NodeConfigurationSource to a separate method.
* Fixes issues after merge; moved assertSeqNos() and assertSameDocIdsOnShards() methods from ESIntegTestCase to InternalTestCluster, so that ccr tests can use these methods too.
This change disallows negative query boosts. Negative scores are not allowed in Lucene 8 so
it is easier to just disallow negative boosts entirely. We should also deprecate negative boosts
in 6.x in order to ensure that users are aware before they upgrade to ES 7.
Relates #33309
This commit introduces settings version to index metadata. This value is
monotonically increasing and is updated on settings updates. This will
be useful in cross-cluster replication so that we can request settings
updates from the leader only when there is a settings update.
Today when submitting an update settings request to update the number of
replicas with a wildcard that does not match any indices and allow no
indices is set to true, the request ends up being interpreted as
updating the number of replicas for all indices. That is, consider the
following sequence:
PUT /test-index
{
  "settings": {
    "index.number_of_replicas": 0
  }
}

PUT /non-existent-*/_settings?expand_wildcards=open&allow_no_indices=true
{
  "settings": {
    "index.number_of_replicas": 1
  }
}

GET /test-index/_settings
The latter will show that the number of replicas on test-index is now
one. This is surprising, and should be considered a bug.
The underlying problem here is treating no indices in the underlying
methods used to update the routing table and the metadata as meaning all
indices. This commit takes away this assumption. Tests that relied on
this behavior have been changed to no longer rely on this.
A test for this situation is added in UpdateNumberOfReplicasIT.
This removes another two methods from `AbstractComponent`. One isn't
used at all and another is only used in a single class in watcher. I've
moved the method that watcher uses into the single class that uses it.
This commit handles cases testing withLocale and withZone when the zone
and locale in question are the same as the special base case. This can
happen sometimes since the locales and zone ids are randomized.
Empty values on keyword fields are filtered by the `map` execution mode
of the `terms` aggregation. This commit restores them as valid buckets.
Closes #34434
This commit removes randomization of locale for DateFormatter equals
tests, instead using explicit locales. The test framework already
randomizes locales, so the random choice of the second locale can
sometimes be equal to the already chosen locale. Randomization also does
not provide any extra protection, as the equality of DateFormatter does
not implement equality of the locales itself.
closes #34337