Today we still wrap recovery source readers on merge even if we
keep all documents recovery source. This basically disables bulk
merging for stored fields. This change skips wrapping if all docs
sources are kept anyway.
This change fixes an unreleased bug that trips an assertion because a static instance
shared among threads is modified during the search. This commit copies the static
instance in order to ensure that each thread can modify the value without modifying
the other instances.
Closes#37179Closes#37266
* ingest: compile mustache template only if field includes '{{''
Prior to this change, any field in an ingest node processor that supports
script templates would be compiled as mustache template regardless if they
contain a template or not. Compiling normal text as mustache templates is
harmless. However, each compilation counts against the script compilation
circuit breaker. A large number of processors without any templates or scripts
could un-intuitively trip the too many script compilations circuit breaker.
This change simple checks for '{{' in the text before it attempts to compile.
fixes#37120
Update BucketSortPipelineAggregator to use a List and Collections.sort() for sorting instead of a priority queue. This preserves the order for equal values. Closes#36322.
* TESTS: Real Coordinator in SnapshotServiceTests
* Introduce real coordinator in SnapshotServiceTests to be able to test network disruptions realistically
* Make adjustments to cluster applier service so that we can pass a mocked single threaded executor for tests
This change adds support for the 'include_type_name' parameter for the
indices.get API. This parameter, which defaults to `false` starting in 7.0,
changes the response to not include the indices type names any longer.
If the parameter is set in the request, we additionally emit a deprecation
warning since using the parameter should be only temporarily necessary while
adapting to the new response format and we will remove it with the next major
version.
This change turns an assertion into an IllegalStateException in SearchPhaseController#getTotalHits.
The goal is to help identify the cause of the failures in https://github.com/elastic/elasticsearch/issues/37179
which seems to fail only in CI.
The assertion will be restored when the issue is solved (NORELEASE).
The test intercepts TransportVerifyShardBeforeCloseAction shard
requests, so it needs a minimum of 2 primary shards on 2 different
nodes to correctly intercepts requests.
These tests failed on CI multiple times in the past weeks because they use a
test cluster with a SUITE scope that recreates nodes between tests. With such
a scope, nodes can be recreated in between test executions and can inherit a
node id from a previous test execution, while they are assigned a random data
path. With the successive node recreations it is possible that a newly recreated
node shares the same node id (but different data path) as a non recreated node.
This commit changes the cluster scope of the CorruptedFileIT and FlushIT
tests which often fail.
The failure is reproducable with :
./gradlew :server:integTest -Dtests.seed=EF3A50C225CF377
-Dtests.class=org.elasticsearch.index.store.CorruptedFileIT
-Dtests.security.manager=true -Dtests.locale=th-TH-u-nu-thai-x-lvariant-TH -Dtests.timezone=America/Rio_Branco
-Dcompiler.java=11 -Druntime.java=8
* [Analysis] Deprecate Standard Html Strip Analyzer
Deprecate only Standard Html Strip Analyzer
If user create index with the analyzer since 7.0, es throws an exception.
If an index was created before 7.0, es issue deprecation log
We will remove it in 8.0
Related #4704
Today we create a global instance of RecoveryResponse then mutate it
when executing each recovery step. This is okay for the current
sequential recovery flow but not suitable for an asynchronous recovery
which we are targeting. With this commit, we return the result of each
step separately, then construct a RecoveryResponse at the end.
Relates #37174
Traditionally remote clusters can be configured dynamically. However,
the compress and ping settings are not currently set to be configured
dynamically. This commit changes that.
This commit implements a straightforward approach to retention lease
expiration. Namely, we inspect which leases are expired when obtaining
the current leases through the replication tracker. At that moment, we
clean the map that persists the retention leases in memory.
This commit converts the epoch time parsing implementation which uses
the java time api to create DateTimeFormatters instead of DateFormatter
implementations. This will allow multi formats for java time to be
implemented in a single DateTimeFormatter in a future change.
Remove several unused helper methods. Most of them are one-liners and should
be easier to be used from the corresponding primitive wrapper classes.
The bytes array conversion methods are unused as well, it should be easy to
re-create them if needed.
This commit fixes an issue with a settings builder method that allows
setting a duration by time unit. In particular, this method can suffer
from a loss of precision. For example, if the input duration is 1500
microseconds then internally we are converting this to "1ms",
demonstrating the loss of precision. Instead, we should internally
convert this to a TimeValue that correctly represents the input
duration, and then convert this to a string using a method that does not
lose the unit. That is what this commit does.
Version needs to be updated after backporting #36997 & #37142
where we added support for providing and serializing localClusterAlias
as well ass absoluteStartMillis.
Relates to #36997 & #37142
Today, a setting can declare that its validity depends on the values of other
related settings. However, the validity of a setting is not always checked
against the correct values of its dependent settings because those settings'
correct values may not be available when the validator runs.
This commit separates the validation of a settings updates into two phases,
with separate methods on the `Setting.Validator` interface. In the first phase
the setting's validity is checked in isolation, and in the second phase it is
checked again against the values of its related settings. Most settings only
use the first phase, and only the few settings with dependencies make use of
the second phase.
The `cluster.unsafe_initial_master_node_count` setting was introduced as a
temporary measure while the design of `cluster.initial_master_nodes` was being
finalised. This commit removes this temporary setting, replacing it with usages
of `cluster.initial_master_nodes` where appropriate.
This commit adds a unique id to cluster blocks, so that they can be uniquely
identified if needed. This is important for the Close Index API where multiple
concurrent closing requests can be executed at the same time. By adding a
UUID to the cluster block, we can generate unique "closing block" that can
later be verified on shards and then checked again from the cluster state
before closing the index. When the verification on shard is done, the closing
block is replaced by the regular INDEX_CLOSED_BLOCK instance.
If something goes wrong, calling the Open Index API will remove the block.
Related to #33888
This commit is the first in a series which will culminate with
fully-functional shard history retention leases.
Shard history retention leases are aimed at preventing shard history
consumers from having to fallback to expensive file copy operations if
shard history is not available from a certain point. These consumers
include following indices in cross-cluster replication, and local shard
recoveries. A future consumer will be the changes API.
Further, index lifecycle management requires coordinating with some of
these consumers otherwise it could remove the source before all
consumers have finished reading all operations. The notion of shard
history retention leases that we are introducing here will also be used
to address this problem.
Shard history retention leases are a property of the replication group
managed under the authority of the primary. A shard history retention
lease is a combination of an identifier, a retaining sequence number, a
timestamp indicating when the lease was acquired or renewed, and a
string indicating the source of the lease. Being leases they have a
limited lifespan that will expire if not renewed. The idea of these
leases is that all operations above the minimum of all retaining
sequence numbers will be retained during merges (which would otherwise
clear away operations that are soft deleted). These leases will be
periodically persisted to Lucene and restored during recovery, and
broadcast to replicas under certain circumstances.
This commit is merely putting the basics in place. This first commit
only introduces the concept and integrates their use with the soft
delete retention policy. We add some tests to demonstrate the basic
management is correct, and that the soft delete policy is correctly
influenced by the existence of any retention leases. We make no effort
in this commit to implement any of the following:
- timestamps
- expiration
- persistence to and recovery from Lucene
- handoff during primary relocation
- sharing retention leases with replicas
- exposing leases in shard-level statistics
- integration with cross-cluster replication
These will occur individually in follow-up commits.
This commit addresses an issue when setting a byte size value setting
using a value that has a fractional component when converted to its
string representation. For example, trying to set a byte size value
setting to a value of 1536 bytes is problematic because internally this
is converted to the string "1.5k". When we go to get this setting, we
try to parse "1.5k" back to a byte size value, which does not support
fractional values. The problem is that internally we are relying on a
method which loses the unit when doing the string conversion. Instead,
we are going to use a method that does not lose the unit and therefore
we can roundtrip from the byte size value to the string and back to the
byte size value.
* Randomly doing non-atomic writes causes rare 0 byte reads from `index-N` files in tests
* Removing this randomness fixes these random failures and is valid because it does not reproduce a real-world failure-mode:
* Cloud-based Blob stores (S3, GCS, and Azure) do not have inconsistent partial reads of a blob, either you read a complete blob or nothing on them
* For file system based blob stores the atomic move we do (to atomically write a file) by setting `java.nio.file.StandardCopyOption#ATOMIC_MOVE` would throw if the file system does not provide for atomic moves
* Closes#37005
I started referring to this parameter name from various places in #37149 so I
think it's a good idea to simplify things by referring to a common constant.
We have recently added support for providing a local cluster alias to a
SearchRequest through a package protected constructor. When executing
cross-cluster search requests with local reduction on each cluster, the
CCS coordinating node will have to provide such cluster alias to each
remote cluster, as well as the absolute start time of the search action
in milliseconds from the time epoch, to be used when evaluating date
math expressions both while executing queries / scripts as well as when
resolving index names.
This commit adds support for providing the start time together with the
cluster alias. It is a final member in the search request, which will
only be set when using cross-cluster search with local reduction (also
known as alternate execution mode). When not provided, the coordinating
node will determine the current time and pass it through (by calling
`System.currentTimeMillis`).
Relates to #32125
This pull request changes the Freeze Index and Close Index actions so
that these actions always requires a Task. The task's id is then propagated
from the Freeze action to the Close action, and then to the Verify shard action.
This way it is possible to track which Freeze task initiates the closing of an index,
and which consecutive verifiy shard are executed for the index closing.
As suggested in #36775, this pull request renames the following methods:
ClusterBlocks.hasGlobalBlock(int)
ClusterBlocks.hasGlobalBlock(RestStatus)
ClusterBlocks.hasGlobalBlock(ClusterBlockLevel)
to something that better reflects the property of the ClusterBlock that is searched for:
ClusterBlocks.hasGlobalBlockWithId(int)
ClusterBlocks.hasGlobalBlockWithStatus(RestStatus)
ClusterBlocks.hasGlobalBlockWithLevel(ClusterBlockLevel)
This commit addresses an issue when setting a time value setting using a
value that has a fractional component when converted to its string
representation. For example, trying to set a time value setting to a
value of 1500ms is problematic because internally this is converted to
the string "1.5s". When we go to get this setting, we try to parse
"1.5s" back to a time value, which does not support fractional
values. The problem is that internally we are relying on a method which
loses the unit when doing the string conversion. Instead, we are going
to use a method that does not lose the unit and therefore we can
roundtrip from the time value to the string and back to the time value.
* The retries on the failing master can lead to concurrently trying to create and delete a snapshot, catch this for now to fix this test
* closes#36779
We don't want two FSDirectories manage pending deletes separately
and optimize file listing. This confuses IndexWriter and causes exceptions
when files are deleted twice but are pending for deletion. This change
move to using a NIOFS subclass that only delegates to MMAP for opening files
all metadata and pending deletes are managed on top.
Closes#37111
Relates to #36668
The query object was incorrectly added to the remote object in the
xcontent. This fix moves the query back into the source, if it was
passed in as part of the RemoteInfo. It also adds a IPv6 test for
reindex from remote such that we can properly validate this.
In Lucene 8 searches can skip non-competitive hits if the total hit count is not requested.
It is also possible to track the number of hits up to a certain threshold. This is a trade off to speed up searches while still being able to know a lower bound of the total hit count. This change adds the ability to set this threshold directly in the track_total_hits search option. A boolean value (true, false) indicates whether the total hit count should be tracked in the response. When set as an integer this option allows to compute a lower bound of the total hits while preserving the ability to skip non-competitive hits when enough matches have been collected.
Relates #33028
Today we block using the generic thread-pool on the target side
until the source side has fully executed the recovery. We still
block on the source side executing the recovery in a blocking fashion
but there is no reason to block on the target side. This will
release generic threads early if there are many concurrent recoveries
happen.
Relates to #36195
Today it's very difficult to see which indices are frozen or rather
throttled via the commonly used monitoring APIs. This change adds
a cell to the `_cat/indices` API to render if an index is `search.throttled`
Relates to #34352
With #36997 we added support for providing a local cluster alias with a
`SearchRequest`. We intended to make sure that when provided as part of
a search request, the cluster alias would never be used for connection
lookups. Yet due to a bug we would still end up looking up the
connection from the remote ones.
This commit adds a test to make sure that whenever we set the cluster
alias to the `SearchRequest` (which can only be done at transport), such
alias is used as index prefix in the returned hits. No errors are thrown
despite no remote clusters are configured indicating that such alias is
never used for connection look-ups.
Also, we add explicit support for the empty cluster alias when printing
out index names through `RemoteClusterAware#buildRemoteIndexName`.
In fact we don't want to print out `:index` when the cluster alias is
set to empty string, but rather `index`. Yet, the semantic of empty
string is different compared to `null` as it will still disable final
reduction. This will be used in CCS when searching against remote
clusters as well as the local one, the local one will have empty prefix
yet it will need to disable final reduction so that its results will be
properly merged with the ones coming from the remote clusters.
Today when electing a master in Zen2 we use the cluster state version to
determine whether a node has a fresh-enough cluster state to become master.
However the cluster state version is not a reliable measure of freshness in the
Zen1 world; furthermore in 6.x the cluster state version is not persisted. This
means that when upgrading from 6.x via a full cluster restart a cluster state
update may be lost if a stale master wins the initial election.
This change fixes this by using the metadata version as a measure of freshness
when in term 0, since this is persisted in 6.x and does more reliably indicate
the freshness of nodes.
It also makes changes parallel to elastic/elasticsearch-formal-models#40 to
support situations in which nodes accept cluster state versions in term 0: this
does not happen in a pure Zen2 cluster, but can happen in mixed clusters and
during upgrades.
Now that we unwrap mappings in DocumentMapperParser#extractMappings, it is not
necessary for the mapping definition to always be nested under the type. This
leniency around the mapping format was added in 2341825358.
* The static threadpool leaks a lot of memory in these tests because it prevents things like the connect listeners from `org.elasticsearch.transport.TcpTransport#initiateConnection`
to be GCed between tests (since they keep being referenced by the threadpool) which in turn reference channels and their underlying buffers
* I could not find any slowdown in executing these tests from this change, if anything they are slightly faster now on my machine
* Relates #36906 (which may be caused by slowness from leaking memory and also becomes testable in a loop by this change)
Types can be used both in the source and dest section of the body which will
be translated to search and index requests respectively. Adding a deprecation warning
for those cases and removing examples using more than one type in reindex since
support for this is going to be removed.
The `composite` aggregation uses a TreeMap to keep track of the best buckets.
This ensures a log(n) time cost to insert new buckets but also to retrieve buckets
that are already present in the map. In order to speed up the retrieval of buckets
this change replaces the TreeMap with a priority queue and a HashMap. The insertion
cost is still log(n) but the retrieval of buckets through the HashMap is now done in constant
time. This optimization can bring significant improvement since each document needs
to check if its associated buckets are already present in the current best buckets.
With this commit we rename `node.store.allow_mmapfs` to
`node.store.allow_mmap`. Previously this setting has controlled whether
`mmapfs` could be used as a store type. With the introduction of
`hybridfs` which also relies on memory-mapping,
`node.store.allow_mmapfs` also applies to `hybridfs` and thus we rename
it in order to convey that it is actually used to allow memory-mapping
but not a specific store type.
Relates #36668
Relates #37070
The QueryStringQueryBuilder does not currently delegate to the field mapper's prefixQuery
method, so does not use indexed prefixes. This commit corrects this.
It also fixes a bug where a query a* would not match the word a if indexed prefixes were used with
a minchar setting of 2.
When executing terms aggregations we set the shard_size, meaning the
number of buckets to collect on each shard, to a value that's higher than
the number of requested buckets, to guarantee some basic level of
precision. We have an optimization in place so that we leave shard_size
set to size whenever we are searching against a single shard, in which
case maximum precision is guaranteed by definition.
Such optimization requires us access to the total number of shards that
the search is executing against. In the context of cross-cluster search,
once we will introduce multiple reduction steps (one per cluster) each
cluster will only know the number of local shards, which is problematic
as we should only optimize if we are searching against a single shard in a
single cluster. It could be that we are searching against one shard per cluster
in which case the current code would optimize number of terms causing
a loss of precision.
While discussing how to address the CCS scenario, we decided that we do
not want to introduce further complexity caused by this single shard
optimization, as it benefits only a minority of cases, especially when
the benefits are not so great.
This commit removes the single shard optimization, meaning that we will
always have heuristic enabled on how many number of buckets to collect
on the shards, even when searching against a single shard.
This will cause more buckets to be collected when searching against a single
shard compared to before. If that becomes a problem for some users, they
can work around that by setting the shard_size equal to the size.
Relates to #32125
With #36221 we introduced shards counting to address a rare failure.
This caused a worse problem in this test when replicas were allocated
and shards failures were randomly returned. The latch has to take into
account additional attempts caused by the shard failures, which means
that in order for run to be called, performPhaseOnShard will be called
(numShards + numFailures) times.
To address this, we need to decide upfront which shard is going to fail,
making sure that at least one shards is successful otherwise the whole
request fails.
Closes#37074
With this commit we introduce a new store type `hybridfs` that is a
hybrid between `mmapfs` and `niofs`. This store type chooses different
strategies to read Lucene files based on the read access pattern (random
or linear) in order to optimize performance.
This store type has been available in earlier versions of Elasticsearch
as `default_fs`. We have chosen a different name now in order to convey
the intent of the store type instead of tying it to the fact whether it
is the default choice.
Relates #36668
SearchAsyncActionTests may fail with RejectedExecutionException as InitialSearchPhase may try to execute a runnable after the test has successfully completed, and the corresponding executor was already shut down. The latch was located in getNextPhase that is almost correct, but does not cover for the last finishAndRunNext round that gets executed after onShardResult is invoked.
This commit moves the latch to count the number of shards, and allowing the test to count down later, after finishAndRunNext has been potentially forked. This way nothing else will be executed once the executor is shut down at the end of the tests.
Closes#36221Closes#33699
* Fixes the issue reproduced in the added tests:
* When having open index requests on a shard that are waiting for a refresh, relocating that shard
becomes blocked until that refresh happens (which could be never as in the test scenario).
With #36997 we added the ability to provide a cluster alias with a
SearchRequest.
The next step is to disable the final reduction whenever a cluster alias
is provided with the SearchRequest. A cluster alias will be provided
when executing a cross-cluster search request with alternate execution
mode, where each cluster does its own reduction locally. In order for
the CCS node to be able to later perform an additional reduction of the
results, we need to make sure that all the needed info stays available.
This means that terms aggregations can be reduced but not pruned, and
pipeline aggs should not be executed. The final reduction will happen
later in the CCS coordinating node.
Relates to #36997 & #32125
With the upcoming cross-cluster search alternate execution mode, the CCS
node will be able to split a CCS request into multiple search requests,
one per remote cluster involved. In order to do that, the CCS node has
to be able to signal to each remote cluster that such sub-requests are
part of a CCS request. Each cluster does not know about the other
clusters involved, and does not know either what alias it is given in
the CCS node, hence the CCS coordinating node needs to be able to provide
the alias as part of the search request so that it is used as index prefix
in the returned search hits.
The cluster alias is a notion that's already supported in the search shards
iterator and search shard target, but it is currently used in CCS as both
index prefix and connection lookup key when fanning out to all the shards.
With CCS alternate execution mode the provided cluster alias needs to be
used only as index prefix, as shards are local to each cluster hence no
cluster alias should be used for connection lookups.
The local cluster alias can be set to the SearchRequest at the transport layer
only, and its constructor/getter methods are package private.
Relates to #32125
Improves on #36449 which did not cover the situation where a node had bumped its term during
the election, and not when receiving the first follower check. This was uncovered while refactoring
NodeJoinTests so that they don't need to access to an internal field of Coordinator anymore (which
can now be made private).
* This speeds up the test from an average 25s down to 7s runtime
* There is no need for artificially slowing down the snapshot to reproduce the issue of an out of sync routing table in practice.
Over hundreds of test runs the test's snapshot shard service still runs in the index not found exception every time reproducing this issue.
* Relates #36294
Keys are compared in BucketSortPipelineAggregation so making key type (ArrayMap) implement Comparable. Maps are compared using the entry set's iterator so ordered maps order is maintain. For each entry first comparing key then value. Assuming all keys are strings. When comparing entries' values if type is not identical and\or type not implementing Comparable, throwing exception. Not implementing equals() and hashCode() functions as parent's ones are sufficient. Tests included.
Today the routing of a SourceToParse is assigned in a separate step
after the object is created. We can easily forget to set the routing.
With this commit, the routing must be provided in the constructor of
SourceToParse.
Relates #36921
We introduce a typeless API in #35790 where we translate the default
docType "_doc" to the user-defined docType. However, we do not rewrite
the SourceToParse with the resolved docType. This leads to a situation
where we have two translog operations for the same document with
different types:
- prvOp [Index{id='9LCpwGcBkJN7eZxaB54L', type='_doc', seqNo=1,
primaryTerm=1, version=1, autoGeneratedIdTimestamp=1545125562123}]
- newOp [Index{id='9LCpwGcBkJN7eZxaB54L', type='not_doc', seqNo=1,
primaryTerm=1, version=1, autoGeneratedIdTimestamp=-1}]
Closes#36769
In this test, we verify that the LocalCheckpointTracker is initialized
with the operations of the safe commit. And the test fails because
Engine#Index does not implement the equals method (should not
implement as it consists of a mutable ParsedDocument).
Closes#36470
This is a follow-up to some discussions around #36399. Currently we have
relatively confusing compression behavior where compression can be
configured for requests based on transport.compress or a specific
setting for a remote cluster. However, we can only compress responses
based on transport.compress as we do not know where a request is
coming from (currently).
This commit modifies the behavior to NEVER compress responses based on
settings. Instead, a response will only be compressed if the request was
compressed. This commit also updates the documentation to more clearly
described transport level compression.
When the script contexts were created in 6, the use of params.ctx was
deprecated. This commit cleans up that code and ensures that params.ctx
is null in both watcher script contexts.
Relates: #34059
This commit adds a RemoteClusterAwareRequest interface that allows a
request to specify which remote node it should be routed to. The remote
cluster aware client will attempt to route the request directly to this
node. Otherwise it will send it as a proxy action to eventually end up
on the requested node.
It implements the ccr clean_session action with this client.
* Adds term number and greppable phrase 'coordinator becoming' to Coordinator
mode changes
* Adds term and version to messages from the ClusterApplier about master
changes
* Reduces some LeaderChecker messages to TRACE level
This commit modifies ESSingleNodeTestCase and ESIntegTestCase and
several concrete test classes to use node names when bootstrapping the
cluster.
Today ClusterBootstrapService.INITIAL_MASTER_NODE_COUNT_SETTING
setting is used to bootstrap clusters in tests. Instead, we want to use
ClusterBootrstapService.INITIAL_MASTER_NODES_SETTING and get rid of
the former setting eventually.
There were two main problems when refactoring InternalTestCluster:
1. Nodes are created one-by-one in buildNode method. And node.name
is created in this method as well. It's not suitable for bootstrapping,
because we need to have the names of all master eligible nodes in
advance, before creating the node with bootstrapping configuration set.
We address this issue by separating buildNode into two methods:
getNodeSettings and buildNode. We first iterate over all nodes to
get nodes settings, then change the setting for the bootstrapping node
and then proceed with building the node.
2. If autoManageMinMasterNodes = false, there is no way for the test to
set the list of bootstrapping nodes because node names are not known in
advance. This problem is solved by adding updateNodesSettings method
to NodeConfigurationSource and ESIntegTestCase (which could be
overridden by concrete integration test class). Once we have the list
of settings for all nodes, the integration test class is allowed to
update it. In our case, we update the
ClusterBootrstapService.INITIAL_MASTER_NODES_SETTING setting.
... MlDistributedFailureIT.testLoseDedicatedMasterNode.
An intermittent failure has been observed in
`MlDistributedFailureIT. testLoseDedicatedMasterNode`.
The test launches a cluster comprised by a dedicated master node
and a data and ML node. It creates a job and datafeed and starts them.
It then shuts down and restarts the master node. Finally, the test asserts
that the two tasks have been reassigned within 10s.
The intermittent failure is due to the assertions that the tasks have been
reassigned failing. Investigating the failure revealed that the `assertBusy`
that performs that assertion times out. Furthermore, it appears that the
job task is not reassigned because the memory tracking info is stale.
Memory tracking info is refreshed asynchronously when a job is attempted
to be reassigned. Tasks are attempted to be reassigned either due to a relevant
cluster state change or periodically. The periodic interval is controlled by a cluster
setting called `cluster.persistent_tasks.allocation.recheck_interval` and defaults to 30s.
What seems to be happening in this test is that if all cluster state changes after the
master node is restarted come through before the async memory info refresh completes,
then the job might take up to 30s until it is attempted to reassigned. Thus the `assertBusy`
times out.
This commit changes the test to reduce the periodic check that reassigns persistent
tasks to `200ms`. If the above theory is correct, this should eradicate those failures.
Closes#36760
Negative timestamps are currently supported in joda time. These are
dates before epoch. However, it doesn't really make sense to have a
negative timestamp, since this is a modern format. Any dates before
epoch can be represented with normal date formats, like ISO8601.
Additionally, implementing negative epoch timestamp parsing in java time
has an edge case which would more than double the code required. This
commit deprecates use of negative epoch timestamps.
Currently the ByteBufferReference does not duplicate the buffer.
This means that any changes to the buffer's limit or position will
impact the reference. This can lead to unexpected behavior. This commit
uses the ByteBuffer#slice method to ensure that the reference retains
its own ByteBuffer.
This commit partially reverts #36447 by using the ability of Joda time's
DateTimeFormatterBuilder to append multiple parsers instead of using the
MergedDateFormatter. The MergedDateFormatter will be removed in a future
change, as it is not as performant due to creating potentially many
exceptions during heavy date parsing. This change is a stop-gap until
that followup is ready.
closes#36602
The test testClusterInfoServiceInformationClearOnError relies on timing behavior. It sets
InternalClusterInfoService.INTERNAL_CLUSTER_INFO_TIMEOUT_SETTING to 1s and relies on the
fact that the stats request completes within that timeframe (which our ever-so-slow CI seems to
violate at times). Unfortunately the logging has been misimplemented in InternalClusterInfoService,
so the corresponding log messages showing that the requests have timed out are missing for this.
The issue can be locally reproduced by reducing the timeout to something lower.
Closes#36554
If the master sends both its follower checks just before disconnection then
neither will receive a response, meaning it must wait for the checks to time
out and send another in order to detect the disconnection and stand down.
Closes#36788
Single-node discovery is not persisting cluster states, which was caused by a recent 7.0-only
refactoring. This commit ensures that the cluster state is properly persisted when using single-node
discovery and adds a corresponding test.
* Replace 0L with an UNASSIGNED_PRIMARY_TERM constant
0 is an illegal value for a primary term that is often used to indicate
the primary term isn't assigned or is irrelevant. This PR replaces the
usage of 0 with a constant, to improve readability and so it can be
tracked and if needed, replaced.
* feedback
The default index_prefix settings will index prefixes of between 2 and 5 characters in length.
Currently, if a prefix search falls outside of this range at either end we fall back to a standard prefix
expansion, which is still very expensive for single character prefixes. However, we have an option
here to use a wildcard expansion rather than a prefix expansion, so that a query of a* gets remapped
to a? against the _index_prefix field - likely to be a very small set of terms, and certain to be much
smaller than a* against the whole index.
This commit adds this extra level of mapping for any prefix term whose length is one less than
the min_chars parameter of the index_prefixes field.
Today we might corrupt a fully deleted segment which is then pruned
once a snapshot is taken. This causes random test failures in CorruptedFileIT.
This change hardens the selection of files to corrupt and removes some fragile
code preventing fully deleted segments to be taken into account.
Closes#36526
Commit #36786 updated docs and strings to reference transport.port instead of
transport.tcp.port. However, this breaks backwards compatibility tests
as the tests rely on string configurations and transport.port does not
exist prior to 6.6. This commit reverts the places were we reference
transport.tcp.port for tests. This work will need to be reintroduced in
a backwards compatible way.
Lucene 7.6 uses a smaller encoding for LatLonShape. This commit forks the LatLonShape classes to Elasticsearch's local lucene package. These classes will be removed on the release of Lucene 7.6.
This is related to #36652. In 7.0 we plan to deprecate a number of
settings that make reference to the concept of a tcp transport. We
mostly just have a single transport type now (based on tcp). Settings
should only reference tcp if they are referring to socket options. This
commit updates the settings in the docs. And removes string usages of
the old settings. Additionally it adds a missing remote compress setting
to the docs.
TransportWriteAction.WriteReplicaResult is not properly synchronized, which can lead to a data race
between the thread that calls respond and the AsyncAfterWriteAction that calls either onSuccess or
onFailure. This data race results in the response listener not being called, which ultimately results in
a stuck replication task on the replica.
This commit is related to #36127. It adds a CcrRestoreSourceService to
track Engine.IndexCommitRef need for in-process file restores. When a
follower starts restoring a shard through the CcrRepository it opens a
session with the leader through the PutCcrRestoreSessionAction. The
leader responds to the request by telling the follower what files it
needs to fetch for a restore. This is not yet implemented.
Once, the restore is complete, the follower closes the session with the
DeleteCcrRestoreSessionAction action.
This commit removes the originalSettings member from Node. It was only
needed to allows test clusters to recreate the node in certain
situations. Instead, the test cluster now keeps track of these settings.
The joda epoch parsing code currently supports passing epoch time as a
number in scientific notation. However, no systems appear to exist which
output timestamps in scientific notation. In java time, it is
particularly complex to implement scientific notation timestamp parsing
within a DateTimeFormatter. This commit adds a deprecation warning when
the epoch time parsers in joda parse scientific notation, so that it can
be removed when switching to java time.
joda are passed a time in scientific notation.
* [Geo] Expose BKDBackedGeoShapes as new VECTOR strategy
This commit exposes lucene's LatLonShape field as a new
strategy in GeoShapeFieldMapper. To use the new indexing
approach, strategy should be set to "vector" in the
geo_shape field mapper. If the tree parameter is set
the mapper will throw an IAE. Note the following:
When using vector strategy:
* geo_shape query does not support querying by POINT,
MULTIPOINT, or GEOMETRYCOLLECTION.
* LINESTRING and MULTILINESTRING queries do not support
WITHIN relation.
* CONTAINS relation is not supported.
* The tree, precision, tree_levels, distance_error_pct,
and points_only parameters will not throw an exception
but they have no effect and will be marked as
deprecated..
All other features are supported.
* revert change to PercolatorFieldMapper
* fix ExistsQuery for geo_shape vector strategy
* add deprecation logging for tree, precision, tree_levels, distance_error_pct, and points_only
* initial update to geoshape docs, including mapping migration updates
* initial support for GeoCollection queries
* fix docs and javadoc errors
* clean up geocollection queries
* set deprecated mapping tests to NOTCONSOLE
* fix geo-shape mapper asciidoc mapping and test warnings
* add support for point queries using LatLonShapeBoundingBoxQuery
* update GeoShapeQueryBuilderTests to include POINT queries for VECTOR strategy. Other comment cleanups
* add lucene geometry build testing to ShapeBuilder tests
* remove deprecated prefix tree mapping from geo-shape.asciidoc
* refactor GeoShapeFieldMapper into LegacyGeoShapeFieldMapper and GeoShapeFieldMapper
Both classes derive from BaseGeoShapeFieldMapper that provides shared parameters:
coerce, ignoreMalformed, ignore_z_value, orientation.
* update docs to remove vector strategy
* fix GeometryCollectionBuilder#buildLucene to return the object created by the shape builder
* fix LineLength failure in GeoJsonShapeParserTests
* ShapeMapper refactor changes from PR feedback
* fix typo in geo-shape.asciidoc
* ignore circle test in docs
* update indexing-approach ref to geoshape-indexing-approach
* add warnings check for LegacyGeoShapeFieldMapper to AbstractBuilderTestCase
* fix deprecatedParameters setup
* update indexing approach
* fixing unexpected warnings failures
* move orientation back to field type
* remove if in LegacyGeoShapeFieldMapper#doXContent. Fix GeoShapeFieldMapper to work with double array as a point
* fix indexing-approach link in circle section of geoshape docs
* add strategy to deprecation warnings check
* fix test failures
* fix typo in QueryStringQueryBuilderTests
* fix total hits to totalHits().value
* fix version number
* add version check to BaseGeoShapeFieldMapper
* fix line length!
* revert version check in BaseGeoShapeFieldMapper
* Fix serialization of mappings of legacy shapes.
* Deprecate types in index API
- deprecate type-based constructors of IndexRequest
- update tests to use typeless IndexRequest constructors
- no yaml tests as they have been already added in #35790
Relates to #35190
MapperService#getAllMetaFields returns an array, which is created out of
an `ObjectHashSet`. Such set does not guarantee deterministic hash
ordering. The array returned by its toArray may be sorted differently
at each run. This caused some repeatability issues in our tests (see #29080)
as we pick random fields from the array of possible metadata fields,
but that won't be repeatable if the input array is sorted differently at
every run. Once setting the tests seed, hppc picks that up and the sorting is
deterministic, but failures don't repeat with the seed that gets printed out
originally (as a seed was not originally set).
See also https://issues.carrot2.org/projects/HPPC/issues/HPPC-173.
With this commit, we simply create a static sorted array that is used for
`getAllMetaFields`. The change is in production code but really affects
only testing as the only production usage of this method was to iterate
through all values when parsing fields in the high-level REST client code.
Anyways, this seems like a good change as returning an array would imply
that it's deterministically sorted.
In order for CCS alternate execution mode (see #32125) to be able to do the final reduction step on the CCS coordinating node, we need to serialize additional info in the transport layer as part of each `SearchHit`. Sort values are already present but they are formatted according to the provided `DocValueFormat` provided. The CCS node needs to be able to reconstruct the lucene `FieldDoc` to include in the `TopFieldDocs` and `CollapseTopFieldDocs` which will feed the `mergeTopDocs` method used to reduce multiple search responses (one per cluster) into one.
This commit adds such information to the `SearchSortValues` and exposes it through a new getter method added to `SearchHit` for retrieval. This info is only serialized at transport and never printed out at REST.
This change adds a new untyped endpoint `{index}/_source/{id}` for both the
GET and the HEAD methods to get the source of a document or check for its
existance. It also adds deprecation warnings to RestGetSourceAction that emit
a warning when the old deprecated "type" parameter is still used. Also updating
documentation and tests where appropriate.
Relates to #35190
This commit adds support to enable bulk upserts to use an index's
default pipeline. Bulk upsert, doc_as_upsert, and script_as_upsert
are all supported.
However, bulk script_as_upsert has slightly surprising behavior since
the pipeline is executed _before_ the script is evaluated. This means
that the pipeline only has access the data found in the upsert field
of the script_as_upsert. The non-bulk script_as_upsert (existing behavior)
runs the pipeline _after_ the script is executed. This commit
does _not_ attempt to consolidate the bulk and non-bulk behavior for
script_as_upsert.
This commit also adds additional testing for the non-bulk behavior,
which remains unchanged with this commit.
fixes#36219
This commit exposes lucene's LatLonShape field as the
default type in GeoShapeFieldMapper. To use the new
indexing approach, simply set "type" : "geo_shape" in
the mappings without setting any of the strategy, precision,
tree_levels, or distance_error_pct parameters. Note the
following when using the new indexing approach:
* geo_shape query does not support querying by
MULTIPOINT.
* LINESTRING and MULTILINESTRING queries do not
yet support WITHIN relation.
* CONTAINS relation is not yet supported.
The tree, precision, tree_levels, distance_error_pct,
and points_only parameters are deprecated.
The remote connection info API leads to resolving addresses of seed
nodes when invoked. This is problematic because if a hostname fails to
resolve, we would not display any remote connection info. Yet, a
hostname not resolving can happen across remote clusters, especially in
the modern world of cloud services with dynamically chaning
IPs. Instead, the remote connection info API should be providing the
configured seed nodes. This commit changes the remote connection info to
display the configured seed nodes, avoiding a hostname resolution. Note
that care was taken to preserve backwards compatibility with previous
versions that expect the remote connection info to serialize a transport
address instead of a string representing the hostname.
This commit adds the last sequence number and primary term of the last operation that have
modified a document to `GetResult` and uses it to power the Update API.
Relates #36148
Relates #10708
Investigating #36556 was made a little trickier because the feedback from the
failing assertion wasn't very informative, and the messages attached to other
nearby assertions were misleading. This commit improves the feedback from these
assertions and tidies up a few other issues in the test suite.
* If removing half the nodes completely removes a shard from the cluster we can't count it in the assertion
* Also:
* Remove unused logger parameter
* Fix typo in var name
* Closes#35365
This commit add support for using sequence numbers to power [optimistic concurrency control](http://en.wikipedia.org/wiki/Optimistic_concurrency_control)
in the delete and index transport actions and requests. A follow up will come with adding sequence
numbers to the update and get results.
Relates #36148
Relates #10708
This commit updates our transport settings for 7.0. It generally takes a
few approaches. First, for normal transport settings, it usestransport.
instead of transport.tcp. Second, it uses transport.tcp, http.tcp,
or network.tcp for all settings that are proxies for OS level socket
settings. Third, it marks the network.tcp.connect_timeout setting for
removal. Network service level settings are only settings that apply to
both the http and transport modules. There is no connect timeout in
http. Fourth, it moves all the transport settings to a single class
TransportSettings similar to the HttpTransportSettings class.
This commit does not actually remove any settings. It just adds the new
renamed settings and adds todos for settings that will be deprecated.
The existing joda compat methods isEquals isAfter and isBefore all took
in a ZonedDateTime, but since all of the scripting is now using the new
JodaCompatZonedDateTime, these are changed to take that in instead.
The following updates were made:
* Add deprecation warnings to `RestUpdateAction`, plus a test in `RestUpdateActionTests`.
* Deprecate relevant methods on the Java HLRC requests/ responses.
* Add HLRC integration tests for the typed APIs.
* Update documentation (for both the REST API and Java HLRC).
* Fix failing integration tests.
Because of an earlier PR, the REST yml tests were already updated (one version without types, and another legacy version that retains types).
SearchSortValuesTests extends now `AbstractSerializingTestCase` which removes some code duplication and standardizes the way we test `fromXContent`, serialization and equals/hashcode.
Also, we were never creating `SearchSortValues` through their public constructor that accept an array of `DocValueFormat` together with the array of raw sort values. That is covered now, which involved some conversion from `BytesRef` to String in the test.
Also, the previous test was not using doing any equality check against the original and parsed versions in `testFromXContent` due to values being parsed with different types in some cases, which is now covered by converting those values using a new method added to `RandomObjects`. The code was already there as part of `randomStoredFieldValues`, but it is now exposed to be used in other scenarios.
For cross cluster search alternate execution mode (see #32125), we will need to take a search request that spans across multiple clusters (based on index prefixes e.g. cluster1:index, cluster2:index etc.) and split it into multiple search requests to be sent to each cluster. A copy constructor added to `SearchRequest` would make that easy and well maintainable in the future.
Something along the same lines already happens in `BulkByScrollParallelizationHelper`, but the corresponding code went outdated as some new fields were added to `SearchRequest` which were not added to the bulk by scroll code. A copy constructor helps making the task of copying a search request maintainable over time.
* Add IntervalQueryBuilder with support for match and combine intervals
* Add relative intervals
* feedback
* YAML test - broekn
* yaml test; begin to add block source
* Add block; make disjunction its own source
* WIP
* Extract IntervalBuilder and add tests for it
* Fix eq/hashcode in Disjunction
* New yaml test
* checkstyle
* license headers
* test fix
* YAML format
* YAML formatting again
* yaml tests; javadoc
* Add OR test -> requires fix from LUCENE-8586
* Add docs
* Re-do API
* Clint's API
* Delete bash script
* doc fixes
* imports
* docs
* test fix
* feedback
* comma
* docs fixes
* Tidy up doc references to old rule
Today we assert that the fake node ID is greater than the real node's ID. In
fact we want to assert that it's greater than _all_ proper UUIDs. This adds
assertions to that effect.
* Adds deprecation logging to ScriptDocValues#getValues.
First commit addressing issue #22919.
`ScriptDocValues#getValues` was added for backwards compatibility but no
longer needed. Scripts using the syntax `doc['foo'].values` when
`doc['foo']` is a list should be using `doc['foo']` instead.
* Fixes two build errors in #34279
* Removes unused import in ScriptDocValuesDatesTest
* Removes used of `.values` in example in diversified-sampler-aggregation.asciidoc
* Removes use of .values from painless test.
Part of #34279
* Updates tests to use `doc[foo]` syntax rather than `doc[foo].values`.
* Removes use of `getValues()` and replaces use of `doc[foo].values` with `doc[foo]`.
* Indentation fix.
* Remove unnecessary list construction at previous `getValues()` callsite in ScriptDocValues.GeoPoints.
* Update migration doc and add link to `getValue` in ScriptDocValues javadoc.
* Fix compile
* Fix javadoc issue
* Removes ScriptDocValues#getValues usage from painless whitelist.
Today, ResyncTask.Status is not registered, but appears as a task status
sometimes, leading to `Failed to deserialize response from handler` exceptions:
java.lang.IllegalArgumentException: Unknown NamedWriteable [org.elasticsearch.tasks.Task$Status][resync]
This commit adds the missing registration.
ConcurrentHashMap does not always behave correctly if removing elements and
concurrently checking for its emptyiness. Work around this by protecting all
usages with a mutex (there was only one usage unprotected by the mutex anyway)
and then we don't even need a ConcurrentHashMap at all.
In order for CCS alternate execution mode (see #32125) to be able to do the final reduction step on the CCS coordinating node, we need to serialize additional info in the transport layer as part of the `SearchHits`, specifically:
- lucene `SortField[]` which contains info about the fields that sorting was performed on and their type, which depends on mappings (that the CCS node does not know about)
- collapse field (`String`) that field collapsing was executed on, if requested
- collapse values (`Object[]`) that field collapsing was based on, if requested
This info is needed to be able to reconstruct the `TopFieldDocs` or `CollapseFieldTopDocs` in the CCS coordinating node to feed the `mergeTopDocs` method and reduce multiple search responses received (one per cluster) into one.
This commit adds such information to the `SearchHits` class. It's nullable info that is not serialized through the REST layer. `SearchPhaseController` sets such info at the end of the hits reduction phase.
* Enable parallel restore operations
* Add uuid to restore in progress entries to uniquely identify them
* Adjust restore in progress entries to be a map in cluster state
* Added tests for:
* Parallel restore from two different snapshots
* Parallel restore from a single snapshot to different indices to test uuid identifiers are correctly used by `RestoreService` and routing allocator
* Parallel restore with waiting for completion to test transport actions correctly use uuid identifiers
In this test, we keep track of a list of index commits then verify that
we reload exactly every operation from the safe commit. If a background
merge is triggered, then we might have a new index commit which is not
recorded in the tracking list. This change disables merges in the test.
Closes#36470
A previous fix of a similar problem in #35201 wasn't general enough, we also
need to catch cases where the randomly generated query string starts with some
version of "now" and hits a date field.
Closes#36595
This commit adds deprecation warnings when using format specifiers with
joda data formats that will change with java time. It also adds the "8"
prefix which may be used to force the new java time format parsing.
The getters and setters for useDisMax() have been deprecated since at least 6.0,
also there hasn't been any reference to the query parameter in the
documentation. Removing it from the builder and tests and replacing it with
`tieBreaker(1.0f)` where necessary.
The commit changes how indices are closed in the MetaDataIndexStateService.
It now uses a 3 steps process where writes are blocked on indices to be closed,
then some verifications are done on shards using the TransportVerifyShardBeforeCloseAction
added in #36249, and finally indices states are moved to CLOSE and their routing
tables removed.
The closing process also takes care of using the pre-7.0 way to close indices if the
cluster contains mixed version of nodes and a node does not support the TransportVerifyShardBeforeCloseAction. It also closes unassigned indices.
Related to #33888
This commit turns MultiValuesSourceFieldConfig into a proper
ToXContentObject for easy testing and verification of its
to/from XContent methods.
Closes#36474.
We are attempting to replace the usage of the `MockTcpTransport` with
the `MockNioTransport`. This commit replaces usages of
`MockTcpTransport` in two zen test cases.
When a security manager is present, the JVM will cache positive hostname
lookups indefinitely. This can be problematic, especially in the modern
world with cloud services where DNS addresses can change, or
environments using Docker containers where IP addresses could be
considered ephemeral. This behavior impacts cluster discovery,
cross-cluster replication and cross-cluster search, reindex from remote,
snapshot repositories, webhooks in Watcher, external authentication
mechanisms, and the Elastic Stack Monitoring Service. The experience of
watching a DNS lookup change yet not be reflected within Elasticsearch
is a poor experience for users. The reason the JVM has this is guard
against DNS cache posioning attacks. Yet, there is already a defense in
the modern world against such attacks: TLS. With proper certificate
validation, even if a resolver falls prey to a DNS cache poisoning
attack, using TLS would neuter the attack. Therefore we have a policy
with dubious security value that significantly impacts usability. As
such we make the usability/security tradeoff towards usability, since
the security risks are very low. This commit introduces new system
properties that Elasticsearch observes to override the JVM DNS cache
policy.
Previously persistent task assignment was checked in the
following situations:
- Persistent tasks are changed
- A node joins or leaves the cluster
- The routing table is changed
- Custom metadata in the cluster state is changed
- A new master node is elected
However, there could be situations when a persistent
task that could not be assigned to a node could become
assignable due to some other change, such as memory
usage on the nodes.
This change adds a timed recheck of persistent task
assignment to account for such situations. The timer
is suspended while checks triggered by cluster state
changes are in-flight to avoid adding burden to an
already busy cluster.
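A minimal sketch of the suspend-while-busy idea, with hypothetical names and an illustrative interval; the real implementation hooks into cluster state processing rather than a bare scheduler.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: periodic recheck of task assignment that stays quiet while
// event-driven (cluster state) checks are already in flight.
class PersistentTaskRecheck {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final AtomicInteger inFlightClusterStateChecks = new AtomicInteger();

    void start() {
        // The interval here is illustrative, not the actual setting's default.
        scheduler.scheduleWithFixedDelay(() -> {
            if (inFlightClusterStateChecks.get() == 0) {
                recheckAssignments();
            }
        }, 30, 30, TimeUnit.SECONDS);
    }

    void clusterStateCheckStarted()  { inFlightClusterStateChecks.incrementAndGet(); }
    void clusterStateCheckFinished() { inFlightClusterStateChecks.decrementAndGet(); }

    private void recheckAssignments() { /* try to assign still-unassigned persistent tasks */ }
}
```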
Closes#35792
* We should compare the target value with the value to be applied before interpreting the update as a change
* This speeds up the test failing in #36496 considerably by preventing state updates on noop setting updates
This commit adds support to engine operations for resolving and verifying the sequence number and
primary term of the last modification to a document before performing an operation. This is
infrastructure to move our [optimistic concurrency control](http://en.wikipedia.org/wiki/Optimistic_concurrency_control) API to use sequence numbers instead of internal versioning.
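To make the idea concrete, here is a toy compare-and-set in the spirit of that API; the names are illustrative and this is not the actual engine code.

```java
import java.util.concurrent.atomic.AtomicReference;

// Toy illustration of sequence-number-based optimistic concurrency control:
// a write is applied only if the document has not changed since the caller
// last observed it (same seqNo and primaryTerm).
class SeqNoOcc {
    record Version(long seqNo, long primaryTerm) {}

    private final AtomicReference<Version> current = new AtomicReference<>(new Version(0, 1));

    boolean writeIfUnchanged(long ifSeqNo, long ifPrimaryTerm, long newSeqNo) {
        Version seen = current.get();
        if (seen.seqNo() != ifSeqNo || seen.primaryTerm() != ifPrimaryTerm) {
            return false; // version conflict: the document was modified concurrently
        }
        return current.compareAndSet(seen, new Version(newSeqNo, seen.primaryTerm()));
    }
}
```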
Relates #36148
Relates #10708
There are certain BootstrapCheck checks that may need access to environment-specific
values. Watcher's EncryptSensitiveDataBootstrapCheck passes in the node's environment
via a constructor to bypass the shortcoming in BootstrapContext. This commit
pulls in the node's environment into BootstrapContext.
Another case is found in #36519, where it is useful to check the state of the
data-path. Since PathUtils.get and Paths.get are forbidden APIs, we rely on
the environment to retrieve references to things like node data paths.
This means that the BootstrapContext will have the same Settings used in the
Environment, which currently differs from the Node's settings.
Storing data files under the cluster name subdirectory has been deprecated and was
meant to be removed in 6.0. This commit removes some leftover references to
these paths.
Includes the following:
* Reversion of doc-values changes in LUCENE-8374; we are interested in seeing if this
has an effect on benchmarks for node-stats and index-stats
* More improvements to docvalues updates
* Make sure to test conversion for both typed and typeless HLRC requests.
* Update a few more statements to deprecatedAndMaybeLog.
* Make sure Rest*SearchTemplateActionTests extend RestActionTestCase.
Currently TransportRequestOptions allows specific requests to request
compression. This commit removes this and always compresses based on the
settings. Additionally, it removes TransportResponseOptions as they
are unused.
This closes#36399.
* Fix CorruptedFileIT to also take last DV generation into account
We currently only prune old .liv generations. With soft_deletes it's important
to also prune DV generations.
* Fix CorruptionUtils to skip the footer bytes after the checksum is read.
Today we read a broken checksum since we also checksum the 8 footer bytes that include
the checksum algorithm and the footer magic.
Closes#36526
Currently, the `BytesStreamOutput` always zeros out the underlying byte
pages when they are acquired. This should not be necessary as the stream
overwrites the underlying bytes as serialization occurs.
`PageCacheRecycler` is the class that creates and holds pages of arrays
for various uses. `BigArrays` is just one user of these pages. This
commit moves the constants that define the page sizes for the recycler
to be on the recycler class.
This commit moves the node field in the JoinRequest object to be a
private field, adding a dedicated accessor. This is a minor breaking
change in that it is no longer possible for all callers to overwrite
this field, but that is a feature.
This commit moves the MergedDateFormatter to a package private class and
reworks joda DateFormatter instances to use that instead of a single
DateTimeFormatter with multiple parsers. This will allow the java and
joda multi formats to share the same format parsing method in a
followup.
Currently our handshake requests do not include a version. This is
unfortunate as we cannot rely on the stream version since it is not the
sending node's version. Instead it is the minimum compatibility version.
The handshake request is currently empty and we do nothing with it. This
should allow us to add data to the request without breaking backwards
compatibility.
This commit adds the version to the handshake request. Additionally, it
allows "future data" to be added to the request, which lets nodes craft a
version-compatible response and properly handle additional data in future
handshake requests. The proper handling of "future data" is useful as this
is the only request where we do not know the other node's version.
Finally, it renames the TcpTransportHandshaker to
TransportHandshaker.
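A sketch of the "future data" framing, assuming a simple length-prefixed layout; the real TransportHandshaker wire format is not shown here.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch: the request carries the sender's version plus a length-prefixed
// blob, so a node that does not understand newer fields can still skip them
// and answer with a version-compatible response.
class HandshakeFraming {
    static void writeRequest(DataOutputStream out, int senderVersion) throws IOException {
        ByteArrayOutputStream future = new ByteArrayOutputStream(); // future fields would go here
        out.writeInt(senderVersion);
        out.writeInt(future.size()); // length prefix for data unknown to older nodes
        future.writeTo(out);
    }

    static int readRequest(DataInputStream in) throws IOException {
        int remoteVersion = in.readInt();
        int futureLength = in.readInt();
        in.readNBytes(futureLength); // skip fields this node does not understand
        return remoteVersion;        // enough to craft a compatible response
    }
}
```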
This commit contains a few minor changes to our search code:
- adjust the visibility of a couple of methods in our search code to package private from public or protected.
- make some of the `SearchPhaseController` methods static where possible
- rename one of the `SearchPhaseController#reducedQueryPhase` methods (used only for scroll requests) to `reducedScrollQueryPhase` without the `isScrollRequest` argument which was always set to `true`
- replace leniency in `SearchPhaseController#setShardIndex` with an assert to make sure that we never set the shard index twice
- remove two null checks where the checked field can never be null
- resolve an unchecked warning
- replace `List#toArray` invocation that creates an array providing the true size with array creation of length 0
- correct a couple of typos in comments
The different `DocValueFormat` implementors throw `UnsupportedOperationException` for methods that they don't support. That is perfectly fine, and quite common as not all implementors support all of the possible formats. This makes it hard though to trace back which implementors support which formats as they all implement the same methods.
This commit introduces default methods in the `DocValueFormat` interface so that all methods throw `UnsupportedOperationException` by default. This way implementors can override only the methods that they specifically support.
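The pattern looks roughly like this simplified sketch (the real `DocValueFormat` methods and signatures differ):

```java
// Simplified sketch of the default-method pattern: every conversion throws by
// default, so an implementor overrides only what it actually supports.
interface ValueFormat {
    default String format(long value) {
        throw new UnsupportedOperationException();
    }

    default String format(double value) {
        throw new UnsupportedOperationException();
    }
}

// A long-only format now declares nothing but the method it supports.
class LongFormat implements ValueFormat {
    @Override
    public String format(long value) {
        return Long.toString(value);
    }
}
```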
This commit modifies BigArrays to take a circuit breaker name and
the circuit breaking service. The default instance of BigArrays that
is passed around everywhere always uses the request breaker. At the
network level, we want to be using the inflight request breaker. So this
change will allow that.
Additionally, as this change moves away from a single instance of
BigArrays, the class is modified to not be a Releasable anymore.
Releasing big arrays was always dispatching to the PageCacheRecycler,
so this change makes the PageCacheRecycler the class that needs to be
managed and torn-down.
Finally, this commit closes#31435 by making the serialization of
transport messages use the inflight request breaker. With this change,
we no longer push the global BigArrays instance to the network level.
Today the Zen2 coordinator only applies a write block when there is no known
elected master, ignoring the `discovery.zen.no_master_block` setting. This
commit resolves this, applying the correct block according to the configuration
instead.
Today we do not enforce soft-deletes when accessing the Lucene changes
snapshot. This might lead to internal errors because we assume
soft-deletes are enabled in that code path.
1. CCR tests work without any changes.
2. `testDanglingIndices` requires changes to the source code (added TODO).
3. `testIndexDeletionWhenNodeRejoins` needs changes because it uses just two
nodes; the node has to be added to the exclusions on restart.
4. `testCorruptTranslogTruncationOfReplica` starts a dedicated master node,
because otherwise the cluster does not form if nodes are stopped and one
node is started back.
5. `testResolvePath` needs a TEST cluster, because all nodes are stopped
at the end of the test and it is not possible to perform the checks needed
by a SUITE cluster.
6. `SnapshotDisruptionIT`: without changes, the test fails because Zen2
retries snapshot creation as soon as the network partition heals. This
results in a race between creating the snapshot and the test cleanup logic
(deleting the index). Zen1, on the other hand, also schedules a retry, but
it takes some time after the network partition heals, so the cleanup logic
executes later and the test passes. A check that the snapshot is eventually
created is added at the end of the test.
Today we log a slightly cryptic "cluster bootstrapping is disabled on this
node" message if bootstrapping hasn't been configured. Since there is today
only one way to bootstrap the cluster it seems preferable to spell out exactly
which setting is missing.
* Lower fielddata circuit breaker default limit
Lower the fielddata circuit breaker default limit from 60% to 40%, as we have
moved to doc_values in most cases.
Deals with a situation where a follower becomes disconnected from the leader, but only for such a
short time that it becomes a candidate and puts up a NO_MASTER_BLOCK, but then receives a
follower check from the leader. If the leader does not notice the node disconnecting, it is important
for the node not to be turned back into a follower but to try and join the leader again.
We should still turn the node into a follower on a follower check when that check triggers
a term bump, as this can help during a leader election to quickly have a leader turn all other nodes
into followers, even before the leader has had the chance to transfer a possibly very large cluster
state.
Closes#36428
The following updates were made:
- Add a new untyped endpoint `{index}/_explain/{id}`.
- Add deprecation warnings to Rest*Action, plus tests in Rest*ActionTests.
- For each REST yml test, make sure there is one version without types, and another legacy version that retains types (called *_with_types.yml).
- Deprecate relevant methods on the Java HLRC requests/ responses.
- Update documentation (for both the REST API and Java HLRC).
This commit converts the watcher execution context to use the joda
compat java time objects. It also again removes the joda methods from
the painless whitelist.
For each API, the following updates were made:
- Add deprecation warnings to `Rest*Action`, plus tests in `Rest*ActionTests`.
- For each REST yml test, make sure there is one version without types, and another legacy version that retains types (called *_with_types.yml).
- Deprecate relevant methods on the Java HLRC requests/ responses.
- Update documentation (for both the REST API and Java HLRC).
Allow `auto_date_histogram` as a valid parent agg for derivative,
cumulative sum, moving average, moving function and serial
differencing pipeline aggregations.
Since all these aggs share the same requirement (sequentially
ordered parent aggs), this commit also refactors to share
the same validation code so that any newly added aggs won't
be forgotten.
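A sketch of what such shared validation could look like; the parent agg names in the list are from this change, but the helper and its wiring are hypothetical.

```java
import java.util.Set;

// Hypothetical shared validation helper: every pipeline agg that needs a
// sequentially ordered parent calls the same check, so newly added parent
// aggs only have to be registered in one place.
final class SequentiallyOrderedParentCheck {
    private static final Set<String> ALLOWED_PARENTS =
        Set.of("histogram", "date_histogram", "auto_date_histogram");

    static void validateParent(String parentAggType) {
        if (ALLOWED_PARENTS.contains(parentAggType) == false) {
            throw new IllegalArgumentException(
                "expected a sequentially ordered parent aggregation but got [" + parentAggType + "]");
        }
    }
}
```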
Closes#35578
This change adds a way to provide the content type of the rest serialization
tests when creating random instances. This is used by SearchHitsTests to ensure
that the internal members of the class are created with the same xContentType
and that equals can be used to compare instances created from an XContent
view.
Today the `GetDiscoveredNodesAction` waits, possibly indefinitely, to discover
enough nodes to bootstrap the cluster. However it is possible that the cluster
forms before a node has discovered the expected collection of nodes, in which
case the action will wait indefinitely despite the fact that it is no longer
required.
This commit changes the behaviour so that the action fails once a node receives
a cluster state with a nonempty configuration, indicating that the cluster has
been successfully bootstrapped and that the `GetDiscoveredNodesAction` therefore
no longer needs to wait.
Relates #36380 and #36381; reverts 558f4ec278.