It is important that folks understand that snapshot/restore isn't
for archiving. It is appropriate for backup and disaster recovery
but not for archival over long periods of time because of version
incompatibility.
Closes #20866
`AbstractSearchAsyncAction` has only been tested in integration tests.
The infrastructure is rather critical and should be tested at the unit-test
level. This change takes the first step.
This changes the CacheBuilder methods that are used to set expiration times to accept a
TimeValue instead of a long. Accepting a long can lead to bugs where an incorrect value is
passed in because the time unit is not clearly identified. By using TimeValue, the caller no
longer needs to worry about the time unit used by the cache or its builder.
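To make the contrast concrete, here is a minimal sketch of the new call shape, assuming the builder's `setExpireAfterWrite` setter and the standard `TimeValue` helpers:

```java
import org.elasticsearch.common.cache.Cache;
import org.elasticsearch.common.cache.CacheBuilder;
import org.elasticsearch.common.unit.TimeValue;

// Before: setExpireAfterWrite(1800000) — milliseconds? nanoseconds? unclear at the call site.
// After: the unit travels with the value, so no guessing is required.
Cache<String, String> cache = CacheBuilder.<String, String>builder()
        .setExpireAfterWrite(TimeValue.timeValueMinutes(30))
        .build();
```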
Before this change the `MultiMatchQuery` called the field types'
`termQuery()` with a null context. This is not correct, so this change
makes the `MultiMatchQuery` use the `QueryShardContext` it stores as a
field.
Relates to https://github.com/elastic/elasticsearch/pull/20796#pullrequestreview-3606305
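An illustrative before/after of the call (the helper method and variable names here are assumptions, not the actual code):

```java
import org.apache.lucene.search.Query;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.query.QueryShardContext;

// Hypothetical sketch: the stored context is now forwarded instead of null.
Query termQueryWithContext(MappedFieldType fieldType, Object value, QueryShardContext shardContext) {
    // Before: fieldType.termQuery(value, null) — context-sensitive field types misbehaved.
    return fieldType.termQuery(value, shardContext);
}
```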
Both the netty3 and netty4 http implementations printed the default
toString representation of PortRange if ports couldn't be bound.
This commit adds a better default toString method to PortRange and
uses the string representation for the error message in the http
implementations.
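A self-contained sketch of the intent (the real class and its fields may differ):

```java
// Hypothetical stand-in for the real PortRange class, to show the idea:
// print "9300-9400" instead of the default "PortRange@1a2b3c4d".
final class PortRange {
    private final int fromPort;
    private final int toPort;

    PortRange(int fromPort, int toPort) {
        this.fromPort = fromPort;
        this.toPort = toPort;
    }

    @Override
    public String toString() {
        return fromPort == toPort ? String.valueOf(fromPort) : fromPort + "-" + toPort;
    }
}
```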
The test testDataFileCorruptionDuringRestore expects failures to happen when accessing snapshot data. However, it would
sometimes fail because MockRepository (by default) only simulates 100 failures.
Sequence number related data (maximum sequence number, local checkpoint,
and global checkpoint) gets stored in Lucene on each commit. The logical
place to store this data is on each Lucene commit's user commit data
structure (see IndexWriter#setCommitData and the new version
IndexWriter#setLiveCommitData). However, previously we did not store the
maximum sequence number in the commit data because the commit data got
copied over before the Lucene IndexWriter flushed the documents to segments
in the commit. This means that between the time that the commit data was
set on the IndexWriter and the time that the IndexWriter completes the commit,
documents with higher sequence numbers could have entered the commit.
Hence, we would use FieldStats on the _seq_no field in the documents to get
the maximum sequence number value, but this suffers the drawback that if the
last sequence number in the commit corresponded to a delete document action,
that sequence number would not show up in FieldStats as there would be no
corresponding document in Lucene.
In Lucene 6.2, the commit data was changed to take an Iterable interface, so
that the commit data can be calculated and retrieved *after* all documents
have been flushed, while the commit data itself is being set on the Lucene commit.
This commit changes max_seq_no so it is stored in the commit data instead of
being calculated from FieldStats, taking advantage of the deferred calculation
of the max_seq_no through passing an Iterable that dynamically sets the iterator
data.
* Improves iterating over commit data (and adds better safety guarantees)
* Adds sequence number and checkpoint testing for document deletion
intertwined with document indexing
* Improves test code slightly
* Removes caching of max_seq_no in commit data iterator and inlines logging
* Adds a test for concurrently indexing and committing segments
to Lucene, ensuring the sequence number related commit data
in each Lucene commit point matches the invariants of
localCheckpoint <= highest sequence number in commit <= maxSeqNo
* Fixes comments
* Addresses code review
* Adds clarification on checking commit data on recovery from translog
* Removes an unneeded method
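A minimal sketch of the deferred-calculation idea on the Lucene side (simplified; the real engine code tracks more than max_seq_no):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;
import org.apache.lucene.index.IndexWriter;

final AtomicLong maxSeqNo = new AtomicLong(); // advanced by indexing threads

void commit(IndexWriter writer) throws Exception {
    final Map<String, String> commitData = new HashMap<>();
    // setLiveCommitData (Lucene 6.2+) takes an Iterable; its iterator() is only
    // consumed once all documents have been flushed, so max_seq_no is read at
    // the last possible moment and covers documents that entered the commit
    // after this method was called.
    writer.setLiveCommitData(() -> {
        commitData.put("max_seq_no", Long.toString(maxSeqNo.get()));
        return commitData.entrySet().iterator();
    });
    writer.commit();
}
```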
Sometimes it's useful or even necessary to use unreleased Version constants, but we should not add those to the Version.java class for several reasons, e.g., BWC tests and assertions along those lines. Yet, it's not really obvious how to do that, so I added some comments and a simple test for this.
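A hedged sketch of what using an unreleased version from a test could look like; `Version.fromId` is the existing factory, but the id below is invented for illustration and not the actual convention:

```java
import org.elasticsearch.Version;

// Hypothetical id for a not-yet-released 6.0.0 — invented for illustration;
// the point is to construct it on the fly rather than add it to Version.java.
Version unreleased = Version.fromId(6000001);
assert unreleased.after(Version.CURRENT);
```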
There was an issue with using the fuzziness parameter in multi_match queries that was
reported in #18710 and was fixed in Lucene 6.2, which is now used on master.
In order to verify that fix and close the original issue, this PR adds the test
from that issue as an integration test.
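A rough sketch of such an integration test (the index, fields, query text, and expected count are placeholders, not the actual test from the issue):

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.common.unit.Fuzziness;
import org.elasticsearch.index.query.QueryBuilders;
import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;

// Inside an ESIntegTestCase: a misspelled term should still match with fuzziness.
SearchResponse response = client().prepareSearch("test")
        .setQuery(QueryBuilders.multiMatchQuery("quikc", "title", "body")
                .fuzziness(Fuzziness.TWO))
        .get();
assertHitCount(response, 1L);
```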
Today we might release a bytes array more than once if the send listener
throws an exception but has already released the array. This is already fixed
in the BytesArray class we use in production, to ensure 3rd-party users don't
release twice, but our mocks still enforce the single-release contract.
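A generic sketch of the release-once pattern that avoids this class of bug (a common idiom, not necessarily the exact fix applied here):

```java
import java.util.concurrent.atomic.AtomicBoolean;
import org.elasticsearch.common.lease.Releasable;

final AtomicBoolean released = new AtomicBoolean(false);

// Safe to call from both the success path and the exception handler:
// only the first caller actually releases the underlying bytes.
void releaseOnce(Releasable bytes) {
    if (released.compareAndSet(false, true)) {
        bytes.close();
    }
}
```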
The snapshot restore state tracks information about shards being restored from a snapshot in the cluster state. For example, it records whether a shard has been successfully restored or whether restoring it was not possible due to a corruption of the snapshot. Recording these events is usually based on changes to the shard routing table, i.e., when a shard is started after a successful restore or failed after an unsuccessful one.

Until now, there were two communication channels to transmit recovery failure / success: one to update the routing table and one to update the restore state. This led to issues where a shard was failed but the restore state was not updated due to connection issues between the data and master node. In some rare situations, this led to a state where the restore state could not be properly cleaned up anymore by the master, making it impossible to start new restore operations.

This change updates the routing table and the restore state in the same cluster state update so that both always stay in sync. It also eliminates the extra communication channel for restore operations and uses the standard cluster state listener mechanism to update the restore listener upon successful completion of a snapshot.
* Fixed writeable name from range to geo_distance
* Added testGeoDistanceAggregation
* Added asserts for correct result in testGeoDistanceAggregation
* Set up mapping on test index
When DirectCandidateGeneratorBuilder was refactored recently, the
ConstructingObjectParser that we have today was not available. Instead we used
a workaround, but it is better to remove that workaround now and use
ConstructingObjectParser instead.
Relates to #18160
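For context, a minimal sketch of the ConstructingObjectParser pattern (generics and declarations simplified; the real declaration for DirectCandidateGeneratorBuilder is more involved):

```java
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.xcontent.ConstructingObjectParser;
import static org.elasticsearch.common.xcontent.ConstructingObjectParser.constructorArg;

// The required "field" argument feeds the constructor; optional settings are
// declared separately and applied via setters after construction.
static final ConstructingObjectParser<DirectCandidateGeneratorBuilder, Void> PARSER =
        new ConstructingObjectParser<>("direct_generator",
                args -> new DirectCandidateGeneratorBuilder((String) args[0]));

static {
    PARSER.declareString(constructorArg(), new ParseField("field"));
    PARSER.declareFloat(DirectCandidateGeneratorBuilder::maxTermFreq, new ParseField("max_term_freq"));
}
```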
Uses the new sorting (#20658) in the `_cat` API to support all use
cases natively. We can still resort to piping things through `sort`
if we need to, but we don't have to for basic stuff like sorting!
* Adding built-in sorting capability to _cat APIs.
Closes #16975
* Addressing PR comments
* Changing value types back to the original implementation and fixing cosmetic issues
* Changing compareTo and hashCode of value types to a better implementation
* Changed value compareTo methods to use Double.compare instead of if statements and fixed some failing unit tests
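The Double.compare change avoids the classic pitfalls of hand-rolled comparisons (NaN handling, -0.0 vs 0.0, and subtraction overflow). A minimal sketch of the pattern, using a made-up value type rather than the actual _cat cell classes:

```java
// Hypothetical value type illustrating the pattern.
final class DoubleCell implements Comparable<DoubleCell> {
    private final double value;

    DoubleCell(double value) {
        this.value = value;
    }

    @Override
    public int compareTo(DoubleCell other) {
        // Handles NaN and signed zero consistently, unlike
        // `value < other.value ? -1 : value > other.value ? 1 : 0`.
        return Double.compare(value, other.value);
    }

    @Override
    public boolean equals(Object obj) {
        return obj instanceof DoubleCell
                && Double.compare(((DoubleCell) obj).value, value) == 0;
    }

    @Override
    public int hashCode() {
        // Keep hashCode consistent with equals.
        return Double.hashCode(value);
    }
}
```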
Shadow replicas cannot simply be promoted to primary by updating a boolean like normal shards. Instead they are shut down, reinitialized, and rebuilt as primaries. Currently we also give them new allocation ids, but that throws off the in-sync allocation ids management. This commit changes this behavior to keep the allocation id of the shard.
Closes #20650