Adds join validation to Zen2, which prevents a node from joining a cluster when the node does not
have the right ES version or does not satisfy any of the other join validation constraints.
This SearchType has been deprecated since at least 6.0 and, according to the
documentation, is only kept around for pre-5.3 requests. Removing it and leaving a
comment as a placeholder so we don't reuse the byte value associated with it
without further consideration.
* Java Time: Fix timezone parsing
An independent test uncovered an issue when parsing a timezone
containing a colon like `01:00` - some formats did not properly support
this.
This commit adds tests for all formats in the dueling tests and fixes a
few issues with existing date formatters.
* fix tests, so they run under java8
* Tests: Add ElasticsearchAssertions.awaitLatch method
Some tests are using assertTrue(latch.await(...)) in their code. This
leads to an assertion error without any error message. This adds a
method which has a nicer error message and can be used in tests.
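A minimal sketch of what such a helper could look like (written from the description above; the exact signature in `ElasticsearchAssertions` may differ):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

import static org.junit.Assert.fail;

public class LatchAssertions {
    // Await the latch and fail with a descriptive message instead of a bare assertion error.
    public static void awaitLatch(CountDownLatch latch, long timeout, TimeUnit unit) throws InterruptedException {
        if (latch.await(timeout, unit) == false) {
            fail("latch was not counted down to zero within [" + timeout + " " + unit
                + "], remaining count is [" + latch.getCount() + "]");
        }
    }
}
```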
* fix forbidden apis
* fix spaces
Today we still wrap recovery source readers on merge even if we
keep the recovery source for all documents. This basically disables bulk
merging for stored fields. This change skips the wrapping if the
source of all documents is kept anyway.
This change fixes an unreleased bug that trips an assertion because a static instance
shared among threads is modified during the search. This commit copies the static
instance in order to ensure that each thread can modify the value without modifying
the other instances.
Closes #37179
Closes #37266
* ingest: compile mustache template only if field includes '{{'
Prior to this change, any field in an ingest node processor that supports
script templates would be compiled as a mustache template regardless of whether
it contained a template or not. Compiling normal text as mustache templates is
harmless. However, each compilation counts against the script compilation
circuit breaker. A large number of processors without any templates or scripts
could unintuitively trip the too many script compilations circuit breaker.
This change simply checks for '{{' in the text before it attempts to compile.
fixes #37120
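As a rough illustration of the check (the class and the compiler hook here are hypothetical, not the actual ingest code):

```java
import java.util.Map;
import java.util.function.Function;

final class IngestTemplateCompiler {
    // Compile with the mustache compiler only when the field value can actually contain
    // a template; plain text becomes a constant and never counts against the script
    // compilation circuit breaker.
    static Function<Map<String, Object>, String> compile(
            String fieldValue,
            Function<String, Function<Map<String, Object>, String>> mustacheCompiler) {
        if (fieldValue != null && fieldValue.contains("{{")) {
            return mustacheCompiler.apply(fieldValue); // real template: compile it
        }
        return model -> fieldValue;                    // plain text: no compilation needed
    }
}
```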
Update BucketSortPipelineAggregator to use a List and Collections.sort() for sorting instead of a priority queue. This preserves the order for equal values. Closes #36322.
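The relevant property is that `Collections.sort` / `List.sort` is a stable sort, whereas a priority queue gives no ordering guarantee for equal keys. A small standalone illustration (bucket type and values are made up):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class StableSortExample {
    static final class Bucket {
        final String key;
        final double sortValue;
        Bucket(String key, double sortValue) { this.key = key; this.sortValue = sortValue; }
        @Override public String toString() { return key + "=" + sortValue; }
    }

    public static void main(String[] args) {
        List<Bucket> buckets = new ArrayList<>();
        buckets.add(new Bucket("a", 1.0));
        buckets.add(new Bucket("b", 2.0));
        buckets.add(new Bucket("c", 2.0)); // ties with "b"
        // The list sort is stable, so "b" keeps its place before "c"; a priority
        // queue could emit them in either order.
        buckets.sort(Comparator.comparingDouble(b -> b.sortValue));
        System.out.println(buckets); // [a=1.0, b=2.0, c=2.0]
    }
}
```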
* TESTS: Real Coordinator in SnapshotServiceTests
* Introduce real coordinator in SnapshotServiceTests to be able to test network disruptions realistically
* Make adjustments to cluster applier service so that we can pass a mocked single threaded executor for tests
This change adds support for the 'include_type_name' parameter for the
indices.get API. This parameter, which defaults to `false` starting in 7.0,
changes the response so that it no longer includes the indices' type names.
If the parameter is set in the request, we additionally emit a deprecation
warning since using the parameter should be only temporarily necessary while
adapting to the new response format and we will remove it with the next major
version.
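For example, with the low-level REST client a typeless response can be requested as follows (a sketch; the index name and client setup are placeholders):

```java
import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class GetIndexWithoutTypes {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request request = new Request("GET", "/my-index");
            // Explicitly asking for the 7.0 default; setting the parameter at all
            // triggers a deprecation warning while it is still supported.
            request.addParameter("include_type_name", "false");
            Response response = client.performRequest(request);
            System.out.println(EntityUtils.toString(response.getEntity()));
        }
    }
}
```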
This change turns an assertion into an IllegalStateException in SearchPhaseController#getTotalHits.
The goal is to help identify the cause of the failures in https://github.com/elastic/elasticsearch/issues/37179
which seems to fail only in CI.
The assertion will be restored when the issue is solved (NORELEASE).
The test intercepts TransportVerifyShardBeforeCloseAction shard
requests, so it needs a minimum of 2 primary shards on 2 different
nodes to correctly intercept requests.
These tests failed on CI multiple times in the past weeks because they use a
test cluster with a SUITE scope that recreates nodes between tests. With such
a scope, a node recreated in between test executions can inherit a
node id from a previous test execution while being assigned a random data
path. With the successive node recreations it is possible that a newly recreated
node shares the same node id (but a different data path) with a non-recreated node.
This commit changes the cluster scope of the CorruptedFileIT and FlushIT
tests, which often fail.
The failure is reproducible with:
./gradlew :server:integTest -Dtests.seed=EF3A50C225CF377
-Dtests.class=org.elasticsearch.index.store.CorruptedFileIT
-Dtests.security.manager=true -Dtests.locale=th-TH-u-nu-thai-x-lvariant-TH -Dtests.timezone=America/Rio_Branco
-Dcompiler.java=11 -Druntime.java=8
* [Analysis] Deprecate Standard Html Strip Analyzer
Deprecate only the Standard Html Strip Analyzer.
If a user creates an index with this analyzer on 7.0 or later, ES throws an exception.
If an index was created before 7.0, ES issues a deprecation log message.
We will remove it in 8.0.
Related #4704
Today we create a global instance of RecoveryResponse then mutate it
when executing each recovery step. This is okay for the current
sequential recovery flow but not suitable for an asynchronous recovery
which we are targeting. With this commit, we return the result of each
step separately, then construct a RecoveryResponse at the end.
Relates #37174
Traditionally remote clusters can be configured dynamically. However,
the compress and ping settings currently cannot be configured
dynamically. This commit changes that.
This commit implements a straightforward approach to retention lease
expiration. Namely, we inspect which leases are expired when obtaining
the current leases through the replication tracker. At that moment, we
clean the map that persists the retention leases in memory.
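A simplified sketch of this read-time expiration (class and field names are illustrative, not the actual replication tracker code):

```java
import java.util.Map;
import java.util.function.LongSupplier;
import java.util.stream.Collectors;

final class RetentionLeaseExpiry {
    record Lease(String id, long retainingSequenceNumber, long timestampMillis, String source) {}

    // Expired leases are simply dropped at the moment the current leases are read;
    // no background expiration task is needed.
    static Map<String, Lease> currentLeases(Map<String, Lease> leases,
                                            long retentionPeriodMillis,
                                            LongSupplier currentTimeMillis) {
        long now = currentTimeMillis.getAsLong();
        return leases.entrySet().stream()
            .filter(e -> now - e.getValue().timestampMillis() <= retentionPeriodMillis)
            .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }
}
```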
This commit converts the epoch time parsing implementation which uses
the java time api to create DateTimeFormatters instead of DateFormatter
implementations. This will allow multi formats for java time to be
implemented in a single DateTimeFormatter in a future change.
Remove several unused helper methods. Most of them are one-liners and their
callers can more easily use the corresponding primitive wrapper classes directly.
The bytes array conversion methods are unused as well; it should be easy to
re-create them if needed.
This commit fixes an issue with a settings builder method that allows
setting a duration by time unit. In particular, this method can suffer
from a loss of precision. For example, if the input duration is 1500
microseconds then internally we are converting this to "1ms",
demonstrating the loss of precision. Instead, we should internally
convert this to a TimeValue that correctly represents the input
duration, and then convert this to a string using a method that does not
lose the unit. That is what this commit does.
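The underlying truncation can be seen with a plain `TimeUnit` conversion (a sketch of the symptom, not of the builder itself):

```java
import java.util.concurrent.TimeUnit;

public class DurationPrecisionLoss {
    public static void main(String[] args) {
        long micros = 1500;
        // Converting to a coarser unit truncates: 1500 microseconds become 1 millisecond.
        System.out.println(TimeUnit.MILLISECONDS.convert(micros, TimeUnit.MICROSECONDS)); // 1
        // Keeping the original unit in the stored string (e.g. "1500micros" instead of "1ms")
        // preserves the full value when the setting is read back.
    }
}
```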
Version needs to be updated after backporting #36997 & #37142
where we added support for providing and serializing localClusterAlias
as well as absoluteStartMillis.
Relates to #36997 & #37142
Today, a setting can declare that its validity depends on the values of other
related settings. However, the validity of a setting is not always checked
against the correct values of its dependent settings because those settings'
correct values may not be available when the validator runs.
This commit separates the validation of a settings updates into two phases,
with separate methods on the `Setting.Validator` interface. In the first phase
the setting's validity is checked in isolation, and in the second phase it is
checked again against the values of its related settings. Most settings only
use the first phase, and only the few settings with dependencies make use of
the second phase.
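A rough sketch of the two-phase shape described above (the method names approximate the `Setting.Validator` interface; the real signatures may differ):

```java
import java.util.Map;

// Phase 1 checks the new value on its own; phase 2 re-checks it once the correct
// values of the settings it depends on are available.
interface TwoPhaseValidator<T> {
    // Phase 1: isolated validation of the value itself.
    void validate(T value);

    // Phase 2: validation against the values of related settings.
    // Most validators leave this as a no-op.
    default void validate(T value, Map<String, Object> relatedSettings) {
    }
}
```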
The `cluster.unsafe_initial_master_node_count` setting was introduced as a
temporary measure while the design of `cluster.initial_master_nodes` was being
finalised. This commit removes this temporary setting, replacing it with usages
of `cluster.initial_master_nodes` where appropriate.
This commit adds a unique id to cluster blocks, so that they can be uniquely
identified if needed. This is important for the Close Index API where multiple
concurrent closing requests can be executed at the same time. By adding a
UUID to the cluster block, we can generate a unique "closing block" that can
later be verified on shards and then checked again from the cluster state
before closing the index. When the verification on the shards is done, the closing
block is replaced by the regular INDEX_CLOSED_BLOCK instance.
If something goes wrong, calling the Open Index API will remove the block.
Related to #33888
This commit is the first in a series which will culminate with
fully-functional shard history retention leases.
Shard history retention leases are aimed at preventing shard history
consumers from having to fall back to expensive file copy operations if
shard history is not available from a certain point. These consumers
include following indices in cross-cluster replication, and local shard
recoveries. A future consumer will be the changes API.
Further, index lifecycle management requires coordinating with some of
these consumers; otherwise it could remove the source before all
consumers have finished reading all operations. The notion of shard
history retention leases that we are introducing here will also be used
to address this problem.
Shard history retention leases are a property of the replication group
managed under the authority of the primary. A shard history retention
lease is a combination of an identifier, a retaining sequence number, a
timestamp indicating when the lease was acquired or renewed, and a
string indicating the source of the lease. Being leases they have a
limited lifespan that will expire if not renewed. The idea of these
leases is that all operations above the minimum of all retaining
sequence numbers will be retained during merges (which would otherwise
clear away operations that are soft deleted). These leases will be
periodically persisted to Lucene and restored during recovery, and
broadcast to replicas under certain circumstances.
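The core of the retention rule, sketched for illustration only (the real integration lives in the soft-deletes retention policy):

```java
import java.util.Collection;

final class SoftDeleteRetention {
    // Operations above the minimum of all retaining sequence numbers must survive
    // merges, which would otherwise purge soft-deleted documents.
    static long minimumRetainedSequenceNumber(Collection<Long> retainingSequenceNumbers,
                                              long defaultRetainedSeqNo) {
        return retainingSequenceNumbers.stream()
            .mapToLong(Long::longValue)
            .min()
            .orElse(defaultRetainedSeqNo);
    }
}
```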
This commit is merely putting the basics in place. This first commit
only introduces the concept and integrates their use with the soft
delete retention policy. We add some tests to demonstrate the basic
management is correct, and that the soft delete policy is correctly
influenced by the existence of any retention leases. We make no effort
in this commit to implement any of the following:
- timestamps
- expiration
- persistence to and recovery from Lucene
- handoff during primary relocation
- sharing retention leases with replicas
- exposing leases in shard-level statistics
- integration with cross-cluster replication
These will occur individually in follow-up commits.
This commit addresses an issue when setting a byte size value setting
using a value that has a fractional component when converted to its
string representation. For example, trying to set a byte size value
setting to a value of 1536 bytes is problematic because internally this
is converted to the string "1.5k". When we go to get this setting, we
try to parse "1.5k" back to a byte size value, which does not support
fractional values. The problem is that internally we are relying on a
method which loses the unit when doing the string conversion. Instead,
we are going to use a method that does not lose the unit and therefore
we can roundtrip from the byte size value to the string and back to the
byte size value.
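The fix amounts to keeping the original unit in the stored string so that the roundtrip is lossless. A tiny illustration (hypothetical formatting, not the actual ByteSizeValue code):

```java
public class ByteSizeRoundTrip {
    public static void main(String[] args) {
        long bytes = 1536;
        // Lossy: converting to the largest "nice" unit yields a fractional value,
        // which the byte-size parser cannot read back as an exact count.
        System.out.println(bytes / 1024.0 + "kb"); // 1.5kb
        // Lossless: keep the original unit in the string representation, so parsing
        // it returns exactly the value that was set.
        System.out.println(bytes + "b"); // 1536b
    }
}
```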
* Randomly doing non-atomic writes causes rare 0 byte reads from `index-N` files in tests
* Removing this randomness fixes these random failures and is valid because it does not reproduce a real-world failure-mode:
* Cloud-based blob stores (S3, GCS, and Azure) do not have inconsistent partial reads of a blob: you either read a complete blob or nothing
* For file system based blob stores the atomic move we do (to atomically write a file) by setting `java.nio.file.StandardCopyOption#ATOMIC_MOVE` would throw if the file system does not provide for atomic moves
* Closes #37005
I started referring to this parameter name from various places in #37149 so I
think it's a good idea to simplify things by referring to a common constant.
We have recently added support for providing a local cluster alias to a
SearchRequest through a package protected constructor. When executing
cross-cluster search requests with local reduction on each cluster, the
CCS coordinating node will have to provide such cluster alias to each
remote cluster, as well as the absolute start time of the search action
in milliseconds from the time epoch, to be used when evaluating date
math expressions both while executing queries / scripts as well as when
resolving index names.
This commit adds support for providing the start time together with the
cluster alias. It is a final member in the search request, which will
only be set when using cross-cluster search with local reduction (also
known as alternate execution mode). When not provided, the coordinating
node will determine the current time and pass it through (by calling
`System.currentTimeMillis`).
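A simplified sketch of that fallback (names are illustrative; in the real code the value is a field on SearchRequest):

```java
import java.util.function.LongSupplier;

final class SearchTimeProvider {
    private final Long absoluteStartMillis; // null unless set by the CCS coordinating node
    private final LongSupplier currentTimeMillis;

    SearchTimeProvider(Long absoluteStartMillis, LongSupplier currentTimeMillis) {
        this.absoluteStartMillis = absoluteStartMillis;
        this.currentTimeMillis = currentTimeMillis;
    }

    // Every cluster resolves "now" (date math, scripts, index name resolution) against
    // the same instant provided by the CCS coordinating node, falling back to the local
    // clock for a plain, non-CCS search.
    long nowInMillis() {
        return absoluteStartMillis != null ? absoluteStartMillis : currentTimeMillis.getAsLong();
    }
}
```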
Relates to #32125
This pull request changes the Freeze Index and Close Index actions so
that these actions always require a Task. The task's id is then propagated
from the Freeze action to the Close action, and then to the Verify shard action.
This way it is possible to track which Freeze task initiates the closing of an index,
and which consecutive verify shard actions are executed for the index closing.
As suggested in #36775, this pull request renames the following methods:
ClusterBlocks.hasGlobalBlock(int)
ClusterBlocks.hasGlobalBlock(RestStatus)
ClusterBlocks.hasGlobalBlock(ClusterBlockLevel)
to something that better reflects the property of the ClusterBlock that is searched for:
ClusterBlocks.hasGlobalBlockWithId(int)
ClusterBlocks.hasGlobalBlockWithStatus(RestStatus)
ClusterBlocks.hasGlobalBlockWithLevel(ClusterBlockLevel)
This commit addresses an issue when setting a time value setting using a
value that has a fractional component when converted to its string
representation. For example, trying to set a time value setting to a
value of 1500ms is problematic because internally this is converted to
the string "1.5s". When we go to get this setting, we try to parse
"1.5s" back to a time value, which does not support fractional
values. The problem is that internally we are relying on a method which
loses the unit when doing the string conversion. Instead, we are going
to use a method that does not lose the unit and therefore we can
roundtrip from the time value to the string and back to the time value.
* The retries on the failing master can lead to concurrently trying to create and delete a snapshot; catch this for now to fix this test
* closes #36779
We don't want two FSDirectories managing pending deletes separately
and optimizing file listing. This confuses IndexWriter and causes exceptions
when files are deleted twice but are still pending for deletion. This change
moves to using a NIOFS subclass that only delegates to MMAP for opening files;
all metadata and pending deletes are managed on top.
Closes #37111
Relates to #36668
The query object was incorrectly added to the remote object in the
xcontent. This fix moves the query back into the source, if it was
passed in as part of the RemoteInfo. It also adds an IPv6 test for
reindex from remote so that we can properly validate this.
In Lucene 8 searches can skip non-competitive hits if the total hit count is not requested.
It is also possible to track the number of hits up to a certain threshold. This is a trade-off to speed up
searches while still being able to know a lower bound of the total hit count. This change adds the ability to set
this threshold directly in the `track_total_hits` search option. A boolean value (true, false) indicates whether
the total hit count should be tracked in the response. When set as an integer this option allows computing a lower
bound of the total hits while preserving the ability to skip non-competitive hits when enough matches have been
collected.
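For example, with the low-level REST client the threshold variant of the option could be expressed like this (index name and query are placeholders):

```java
import org.elasticsearch.client.Request;

public class TrackTotalHitsExample {
    public static void main(String[] args) {
        Request request = new Request("GET", "/my-index/_search");
        // An integer threshold: count hits accurately up to 1000 and beyond that only
        // report a lower bound, so non-competitive hits can still be skipped.
        request.setJsonEntity("{ \"track_total_hits\": 1000, \"query\": { \"match_all\": {} } }");
        // A boolean (true/false) instead of an integer toggles exact total hit tracking on or off.
    }
}
```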
Relates #33028
Today we block using the generic thread-pool on the target side
until the source side has fully executed the recovery. We still
block on the source side executing the recovery in a blocking fashion
but there is no reason to block on the target side. This will
release generic threads early if many concurrent recoveries are
happening.
Relates to #36195
Today it's very difficult to see which indices are frozen or rather
throttled via the commonly used monitoring APIs. This change adds
a cell to the `_cat/indices` API to show whether an index is `search.throttled`
Relates to #34352
With #36997 we added support for providing a local cluster alias with a
`SearchRequest`. We intended to make sure that when provided as part of
a search request, the cluster alias would never be used for connection
lookups. Yet due to a bug we would still end up looking up the
connection from the remote ones.
This commit adds a test to make sure that whenever we set the cluster
alias to the `SearchRequest` (which can only be done at transport), such
alias is used as index prefix in the returned hits. No errors are thrown
despite no remote clusters being configured, indicating that such an alias is
never used for connection look-ups.
Also, we add explicit support for the empty cluster alias when printing
out index names through `RemoteClusterAware#buildRemoteIndexName`.
In fact we don't want to print out `:index` when the cluster alias is
set to the empty string, but rather `index`. Yet, the semantics of the empty
string are different compared to `null` as it will still disable final
reduction. This will be used in CCS when searching against remote
clusters as well as the local one: the local one will have an empty prefix
yet it will need to disable final reduction so that its results are
properly merged with the ones coming from the remote clusters.
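A sketch of the prefixing rule described here (illustrative only; the real method is `RemoteClusterAware#buildRemoteIndexName`):

```java
final class RemoteIndexNames {
    static final String REMOTE_CLUSTER_INDEX_SEPARATOR = ":";

    // An empty cluster alias means "the local cluster": print the bare index name
    // rather than ":index". Unlike null, the empty alias still disables the final
    // reduction step.
    static String buildRemoteIndexName(String clusterAlias, String indexName) {
        return clusterAlias == null || clusterAlias.isEmpty()
            ? indexName
            : clusterAlias + REMOTE_CLUSTER_INDEX_SEPARATOR + indexName;
    }
}
```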
Today when electing a master in Zen2 we use the cluster state version to
determine whether a node has a fresh-enough cluster state to become master.
However the cluster state version is not a reliable measure of freshness in the
Zen1 world; furthermore in 6.x the cluster state version is not persisted. This
means that when upgrading from 6.x via a full cluster restart a cluster state
update may be lost if a stale master wins the initial election.
This change fixes this by using the metadata version as a measure of freshness
when in term 0, since this is persisted in 6.x and does more reliably indicate
the freshness of nodes.
It also makes changes parallel to elastic/elasticsearch-formal-models#40 to
support situations in which nodes accept cluster state versions in term 0: this
does not happen in a pure Zen2 cluster, but can happen in mixed clusters and
during upgrades.
Now that we unwrap mappings in DocumentMapperParser#extractMappings, it is not
necessary for the mapping definition to always be nested under the type. This
leniency around the mapping format was added in 2341825358.
* The static threadpool leaks a lot of memory in these tests because it prevents things like the connect listeners from `org.elasticsearch.transport.TcpTransport#initiateConnection`
from being GCed between tests (since they keep being referenced by the threadpool), which in turn reference channels and their underlying buffers
* I could not find any slowdown in executing these tests from this change; if anything they are slightly faster now on my machine
* Relates #36906 (which may be caused by slowness from leaking memory and also becomes testable in a loop by this change)
Types can be used in both the source and dest sections of the body, which will
be translated to search and index requests respectively. This adds a deprecation warning
for those cases and removes examples using more than one type in reindex, since
support for this is going to be removed.
The `composite` aggregation uses a TreeMap to keep track of the best buckets.
This ensures a log(n) time cost to insert new buckets but also to retrieve buckets
that are already present in the map. In order to speed up the retrieval of buckets
this change replaces the TreeMap with a priority queue and a HashMap. The insertion
cost is still log(n) but the retrieval of buckets through the HashMap is now done in constant
time. This optimization can bring significant improvement since each document needs
to check if its associated buckets are already present in the current best buckets.
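A condensed sketch of the structural change (bucket keys are simplified to strings; the real code works on composite key slots and assumes `size > 0`):

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

final class TopBuckets {
    private final int size;
    // The queue keeps the "worst" of the current best buckets on top for O(log n) eviction...
    private final PriorityQueue<String> queue = new PriorityQueue<>(Comparator.reverseOrder());
    // ...while the map answers "is this bucket already tracked?" in constant time per document.
    private final Map<String, Long> docCounts = new HashMap<>();

    TopBuckets(int size) {
        this.size = size;
    }

    void add(String key) {
        Long count = docCounts.get(key);
        if (count != null) {                              // constant-time hit for an existing bucket
            docCounts.put(key, count + 1);
        } else if (docCounts.size() < size) {             // still room: insert in O(log n)
            queue.add(key);
            docCounts.put(key, 1L);
        } else if (key.compareTo(queue.peek()) < 0) {     // better than the current worst bucket
            docCounts.remove(queue.poll());
            queue.add(key);
            docCounts.put(key, 1L);
        }
    }
}
```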
With this commit we rename `node.store.allow_mmapfs` to
`node.store.allow_mmap`. Previously this setting has controlled whether
`mmapfs` could be used as a store type. With the introduction of
`hybridfs` which also relies on memory-mapping,
`node.store.allow_mmapfs` also applies to `hybridfs` and thus we rename
it in order to convey that it is actually used to allow memory-mapping
but not a specific store type.
Relates #36668
Relates #37070
The QueryStringQueryBuilder does not currently delegate to the field mapper's prefixQuery
method, so does not use indexed prefixes. This commit corrects this.
It also fixes a bug where a query `a*` would not match the word `a` if indexed prefixes were used with
a `min_chars` setting of 2.
When executing terms aggregations we set the shard_size, meaning the
number of buckets to collect on each shard, to a value that's higher than
the number of requested buckets, to guarantee some basic level of
precision. We have an optimization in place so that we leave shard_size
set to size whenever we are searching against a single shard, in which
case maximum precision is guaranteed by definition.
Such an optimization requires access to the total number of shards that
the search is executing against. In the context of cross-cluster search,
once we will introduce multiple reduction steps (one per cluster) each
cluster will only know the number of local shards, which is problematic
as we should only optimize if we are searching against a single shard in a
single cluster. It could be that we are searching against one shard per cluster
in which case the current code would optimize the number of terms, causing
a loss of precision.
While discussing how to address the CCS scenario, we decided that we do
not want to introduce further complexity caused by this single shard
optimization, as it benefits only a minority of cases, especially when
the benefits are not so great.
This commit removes the single shard optimization, meaning that we will
always apply the heuristic for how many buckets to collect on the
shards, even when searching against a single shard.
This will cause more buckets to be collected when searching against a single
shard compared to before. If that becomes a problem for some users, they
can work around that by setting the shard_size equal to the size.
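For reference, the per-shard heuristic follows the documented default of `shard_size = (size * 1.5 + 10)` (sketched below; the exact constants live in BucketUtils and may change across versions):

```java
final class ShardSizeHeuristic {
    // Collect more buckets per shard than requested, because each shard only sees
    // part of the data; documented default: size * 1.5 + 10.
    static int suggestShardSize(int size) {
        return (int) (size * 1.5) + 10;
    }
}
```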
Relates to #32125
With #36221 we introduced shards counting to address a rare failure.
This caused a worse problem in this test when replicas were allocated
and shards failures were randomly returned. The latch has to take into
account additional attempts caused by the shard failures, which means
that in order for run to be called, performPhaseOnShard will be called
(numShards + numFailures) times.
To address this, we need to decide upfront which shard is going to fail,
making sure that at least one shard is successful, otherwise the whole
request fails.
Closes #37074
With this commit we introduce a new store type `hybridfs` that is a
hybrid between `mmapfs` and `niofs`. This store type chooses different
strategies to read Lucene files based on the read access pattern (random
or linear) in order to optimize performance.
This store type has been available in earlier versions of Elasticsearch
as `default_fs`. We have chosen a different name now in order to convey
the intent of the store type instead of tying it to whether it
is the default choice.
Relates #36668
SearchAsyncActionTests may fail with RejectedExecutionException as InitialSearchPhase may try to execute a runnable after the test has successfully completed, and the corresponding executor was already shut down. The latch was located in getNextPhase, which is almost correct, but it does not cover the last finishAndRunNext round that gets executed after onShardResult is invoked.
This commit moves the latch to count the number of shards, allowing the test to count down later, after finishAndRunNext has been potentially forked. This way nothing else will be executed once the executor is shut down at the end of the tests.
Closes #36221
Closes #33699
* Fixes the issue reproduced in the added tests:
* When there are open index requests on a shard waiting for a refresh, relocating that shard
becomes blocked until that refresh happens (which, as in the test scenario, could be never).
With #36997 we added the ability to provide a cluster alias with a
SearchRequest.
The next step is to disable the final reduction whenever a cluster alias
is provided with the SearchRequest. A cluster alias will be provided
when executing a cross-cluster search request with alternate execution
mode, where each cluster does its own reduction locally. In order for
the CCS node to be able to later perform an additional reduction of the
results, we need to make sure that all the needed info stays available.
This means that terms aggregations can be reduced but not pruned, and
pipeline aggs should not be executed. The final reduction will happen
later in the CCS coordinating node.
Relates to #36997 & #32125