The second set of assertions was accidentally using the count's
moving average for the error delta in the value's moving average
assertion. This fixes the typo, and unmutes the test.
Closes #29456
The following tokenizers were moved: classic, edge_ngram,
letter, lowercase, ngram, path_hierarchy, pattern, thai, uax_url_email and
whitespace.
Left the keyword tokenizer factory in the server module, because
normalizers directly depend on it. This should be addressed in a
follow-up change.
Relates to #23658
We want copying settings to be the default behavior. This commit
deprecates not copying settings, and disallows explicitly not copying
settings. This gives users a transition path to the future default
behavior.
These tests failed due to in flight operations on the primary shard.
Sadly, we don't have any clue about those ops. This commit unmutes
these tests and logs the acquirers when checking for ongoing ops.
1> [2018-05-02T23:10:32,145][INFO ][o.e.i.f.FlushIT ] Third
seal: Total shards: [2], failed: [true], reason: [[1] ongoing operations
on primary], detail: []
Relates #29392
The writeBlob method for FsBlobContainer already opens the file with StandardOpenOption.CREATE_NEW, so there's no need for an extra blobExists(blobName) check.
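For illustration, a minimal sketch of why the check is redundant (hypothetical `writeBlob` helper, not the actual FsBlobContainer code): `StandardOpenOption.CREATE_NEW` makes the open itself fail atomically with a `FileAlreadyExistsException` when the target already exists.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

class WriteBlobSketch {
    // CREATE_NEW fails atomically with FileAlreadyExistsException if the
    // target exists, so a prior blobExists(blobName) check adds nothing.
    static void writeBlob(Path dir, String blobName, InputStream data) throws IOException {
        try (OutputStream out = Files.newOutputStream(dir.resolve(blobName), StandardOpenOption.CREATE_NEW)) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = data.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
    }
}
```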
Fixes longitude validation in the `geo_polygon` query builder. Queries
with an out-of-range longitude currently fail, but only later, during
polygon construction, and with a rather complicated error message.
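A minimal sketch of the kind of eager check this adds (hypothetical helper; the real validation lives in the query builder):

```java
class LongitudeCheckSketch {
    // Fail fast with a clear message instead of failing later during
    // polygon construction with a harder-to-read error.
    static double checkLongitude(double lon) {
        if (Double.isNaN(lon) || lon < -180.0 || lon > 180.0) {
            throw new IllegalArgumentException(
                "invalid longitude " + lon + "; must be between -180.0 and 180.0");
        }
        return lon;
    }
}
```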
Fixes #30488
The MasterService takes responsibility for timeouts of the AckListeners that it
creates, and the rest of the Discovery subsystem is unaware of these timeouts,
so there's no need for this to appear in the Discovery.AckListener interface.
Also fix a typo in the name of DelegatingAckListener.
This commit removes a test that we can not restore from 1.x and 2.x
repository files. This test is not needed, the version of Elasticsearch
that this commit targets can not even read index files from those
versions.
This commit avoids deadlocks in the cache by removing dangerous places
where we try to take the LRU lock while completing a future. Instead, we
block for the future to complete, and then execute the handling code
under the LRU lock (for example, eviction).
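A sketch of the reordering, with hypothetical names (`lruLock`, `getAndPromote`); the point is that the future is awaited before the lock is taken, never while holding it:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.locks.ReentrantLock;

class CacheSketch {
    private final ReentrantLock lruLock = new ReentrantLock();

    // Deadlock-prone ordering: completing a future while holding lruLock can
    // run arbitrary listeners that also try to take lruLock. Instead, wait
    // for the value outside the lock, then do LRU bookkeeping under it.
    <V> V getAndPromote(CompletableFuture<V> loading) throws InterruptedException, ExecutionException {
        V value = loading.get(); // block outside the LRU lock
        lruLock.lock();
        try {
            // eviction / promotion bookkeeping happens here
            return value;
        } finally {
            lruLock.unlock();
        }
    }
}
```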
Previously `BulkProcessor` retry logic was based on the exception type of the failed response (`EsRejectedExecutionException`). This commit changes it to be based on the returned status code. This allows us to reproduce the same retry behaviour when the `BulkProcessor` is used from the high-level REST client, which was previously not the case as we cannot rebuild the same exception type when parsing back the response. This change has no effect on the transport client.
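A hedged sketch of the status-based predicate (the actual retry handler wiring differs): a 429 `TOO_MANY_REQUESTS` on a failed item marks it retryable, regardless of whether the response came over transport or REST.

```java
import org.elasticsearch.action.bulk.BulkItemResponse;
import org.elasticsearch.rest.RestStatus;

class RetrySketch {
    // A status check works for both transport and REST responses, whereas
    // the original exception type cannot be rebuilt from a parsed response.
    static boolean shouldRetry(BulkItemResponse item) {
        return item.isFailed()
            && item.getFailure().getStatus() == RestStatus.TOO_MANY_REQUESTS; // 429
    }
}
```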
Closes #28885
This commit adds the Snapshot Client with a first API call within it,
the get repositories call in snapshot/restore module. This also creates
a snapshot namespace for the docs, as well as get repositories docs.
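A hedged usage sketch (exact signatures have shifted across client versions):

```java
import java.io.IOException;

import org.elasticsearch.action.admin.cluster.repositories.get.GetRepositoriesRequest;
import org.elasticsearch.action.admin.cluster.repositories.get.GetRepositoriesResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;

class SnapshotClientSketch {
    // List every registered snapshot repository via the new snapshot namespace.
    static GetRepositoriesResponse listRepositories(RestHighLevelClient client) throws IOException {
        GetRepositoriesRequest request = new GetRepositoriesRequest(); // no names = all repositories
        return client.snapshot().getRepositories(request, RequestOptions.DEFAULT);
    }
}
```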
Relates #27205
Today we can execute cluster API actions on only master, data or ingest nodes
using the `master:true`, `data:true` and `ingest:true` filters, but it is not
so easy to select coordinating-only nodes (i.e. those nodes that are neither
master nor data nor ingest nodes). This change fixes this by adding support for
a `coordinating_only` filter such that `coordinating_only:true` adds all
coordinating-only nodes to the set of selected nodes, and
`coordinating_only:false` deletes them.
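In code, a coordinating-only node is simply one with none of the other roles, so a request such as `GET /_nodes/coordinating_only:true` selects exactly those nodes. A sketch of the predicate:

```java
import org.elasticsearch.cluster.node.DiscoveryNode;

class CoordinatingOnlySketch {
    // A coordinating-only node has none of the master, data, or ingest roles.
    static boolean isCoordinatingOnly(DiscoveryNode node) {
        return node.isMasterNode() == false
            && node.isDataNode() == false
            && node.isIngestNode() == false;
    }
}
```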
Resolves #28831.
Fixes an edge case when using `more_like_this` where TermVectorsWriter
could throw an NPE when a field produced zero tokens after analysis. This
changes the implementation to use an empty list of tokens in this case.
Closes #30148
Auto-expands replicas in the same cluster state update (instead of a follow-up reroute) where nodes are added or removed.
Closes #1873, fixing an issue where nodes drop their copy of auto-expanded data when coming up, only to sync it again later.
Adds verification that geohashes are not empty and contain only
valid characters. This fixes an issue where an empty geohash is
treated as [-180, -90] and geohashes containing non-geohash characters
are resolved into invalid coordinates.
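A sketch of the added checks (hypothetical helper; the standard geohash alphabet is base-32 without a, i, l, and o):

```java
class GeohashSketch {
    // Standard geohash base-32 alphabet (no a, i, l, o).
    private static final String ALPHABET = "0123456789bcdefghjkmnpqrstuvwxyz";

    static void validate(String geohash) {
        if (geohash == null || geohash.isEmpty()) {
            throw new IllegalArgumentException("geohash must not be empty");
        }
        for (int i = 0; i < geohash.length(); i++) {
            char c = geohash.charAt(i);
            if (ALPHABET.indexOf(c) < 0) {
                throw new IllegalArgumentException(
                    "unsupported character '" + c + "' in geohash [" + geohash + "]");
            }
        }
    }
}
```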
Closes #23579
When deleting or creating a snapshot for a given shard, Elasticsearch
usually starts by listing all the existing snapshotted files in the repository.
Then it computes a diff and deletes the snapshotted files that are not
needed anymore. During this deletion, an exception is thrown if the file
to be deleted does not exist anymore.
This behavior is challenging with cloud based repository implementations
like S3 where a file that has been deleted can still appear in the bucket for
a few seconds or minutes (because the deletion can take some time to be fully
replicated on S3). If the deleted file appears in the listing of files, then the
following deletion will fail with a NoSuchFileException and the snapshot
will be partially created/deleted.
This pull request makes the deletion of these files a bit less strict, i.e. not
failing if the file we want to delete does not exist anymore. It introduces a
new BlobContainer.deleteIgnoringIfNotExists() method that can be used
at some specific places where not failing when deleting a file is
considered harmless.
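A minimal sketch of what the filesystem flavor of such a method might look like (hypothetical, for illustration):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

class BlobDeleteSketch {
    // Tolerate already-deleted blobs: eventually consistent listings (e.g. S3)
    // may still show files that are already gone.
    static void deleteIgnoringIfNotExists(Path blob) throws IOException {
        try {
            Files.delete(blob);
        } catch (NoSuchFileException e) {
            // the blob is already gone, which is the desired end state
        }
    }
}
```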
Closes #28322
The test indexes new documents and is thus correct in testing that the response result
is `CREATED`. Sadly, we can't guarantee exactly-once delivery just yet.
Relates #9967
Closes #21658
Today when processing a request for a URL path for which we can not find
a handler, we send back a plain-text response. Yet we have the Accept
header in hand and can respect the accepted media type of the
request. This commit addresses this.
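A hypothetical sketch of the negotiation step (not the actual REST-layer code): derive the error body's media type from the Accept header, falling back to plain text.

```java
class ContentTypeSketch {
    // Answer in the media type the client asked for, defaulting to text.
    static String errorContentType(String acceptHeader) {
        if (acceptHeader != null && acceptHeader.contains("application/json")) {
            return "application/json";
        }
        return "text/plain; charset=UTF-8";
    }
}
```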
Changes how data is read from CipherInputStream
Instead of using `read()` and checking that the bytes read are what we
expect, use `readFully()`, which reads exactly the requested number of
bytes, continuing to read until the end of the stream and throwing an
`EOFException` if not all bytes can be read.
This approach keeps the simplicity of using CipherInputStream while
working as expected with both the JCE and BCFIPS Security Providers.
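For illustration, `DataInputStream.readFully` from the JDK has exactly these semantics (a hedged stand-in; the commit may use its own helper):

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

class ReadFullySketch {
    // readFully keeps reading until the buffer is filled and throws
    // EOFException if the stream ends first, unlike a single read() call
    // which may legally return fewer bytes.
    static byte[] readExactly(InputStream cipherStream, int length) throws IOException {
        byte[] bytes = new byte[length];
        new DataInputStream(cipherStream).readFully(bytes); // throws EOFException on truncation
        return bytes;
    }
}
```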
This PR adds support for the Get Settings API to the Java high-level REST client.
Furthermore, logic related to the retrieval of default settings has been moved from the rest layer into the transport layer, and default settings may now be retrieved consistently via both the rest API and the transport API.
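A hedged usage sketch (`includeDefaults` and the exact signatures are assumptions):

```java
import java.io.IOException;

import org.elasticsearch.action.admin.indices.settings.get.GetSettingsRequest;
import org.elasticsearch.action.admin.indices.settings.get.GetSettingsResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;

class GetSettingsSketch {
    static GetSettingsResponse settingsOf(RestHighLevelClient client, String index) throws IOException {
        GetSettingsRequest request = new GetSettingsRequest()
            .indices(index)
            .includeDefaults(true); // also return default settings (assumed option)
        return client.indices().getSettings(request, RequestOptions.DEFAULT);
    }
}
```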
Upgrade to lucene-7.4.0-snapshot-1ed95c097b
This version contains:
* An Analyzer for Korean
* An IntervalQuery and IntervalsSource that retrieve minimum intervals of positional queries.
* A new API to retrieve matches (offsets and positions) of a query for a single document.
* Support for soft deletes in the index writer.
* A fixed shingle filter that handles index time synonyms.
* Support for emoji sequences in ICUTokenizer (with an upgrade to ICU 61.1)
We were recently looking at bugs that can only occur if two different documents were indexed concurrently. For example, what happens if the local checkpoint advances above the sequence number of a document that's being indexed? That can only happen if another concurrent operation caused the checkpoint to advance, and it has to be another document to allow concurrency, as we acquire a per-uid lock. While our investigation proved that the suspected bug doesn't exist, we still discovered that our unit testing coverage is not good enough to cover this case.
This PR extends the concurrent out-of-order replica processing test to use two documents in its history.
The Get Repositories response object held a list of RepositoryMetaData
entries. This object does not have the from/toXContent methods that are
needed to expose this to the high level REST client. The
RepositoriesMetaData, however, does, and it also contains a list of
RepositoryMetaData objects within it. So rather than duplicate this
logic or move it (RepositoriesMetaData is a fragment object used by
cluster state), the object holding state in the Response was changed to
use the RepositoriesMetaData instead. This also cleans up the read/write
methods in the response, as they can now delegate to the read/write methods
of RepositoriesMetaData, which were not present in the singular class.
Fix NPE when CumulativeSum agg encounters null/empty bucket
If the cusum agg encounters a null value, it's because the value is
missing (like the first value from a derivative agg), the path is
not valid, or the bucket in the path was empty.
Previously cusum would just explode on the null, but this changes it
so we only increment the sum if the value is non-null and finite.
This is safe because even if the cusum encounters all null or empty
buckets, the cumulative sum is still zero (like how the sum agg returns
zero even if all the docs were missing values).
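A condensed sketch of the new rule (hypothetical helper, not the real aggregator): skip null and non-finite values, carrying the running sum forward.

```java
class CusumSketch {
    // Only finite, non-null bucket values contribute to the running sum;
    // null/empty buckets simply carry the sum forward instead of throwing.
    static double[] cumulativeSum(Double[] bucketValues) {
        double sum = 0.0;
        double[] out = new double[bucketValues.length];
        for (int i = 0; i < bucketValues.length; i++) {
            Double v = bucketValues[i];
            if (v != null && Double.isFinite(v)) {
                sum += v;
            }
            out[i] = sum;
        }
        return out;
    }
}
```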
I went ahead and tweaked AggregatorTestCase to allow testing pipelines,
so that I could delete the IT test and reimplement it as AggTests.
Closes #27544
This commit removes the http.enabled setting. While all real nodes (started with bin/elasticsearch) will always have an http binding, there are many tests that rely on the quickness of not actually needing to bind to 2 ports. For this case, the MockHttpTransport.TestPlugin provides a dummy http transport implementation which is used by default in ESIntegTestCase.
closes #12792
Suggester Options have a collate match field that is returned when the prune
option is set to true. These values should be merged together in the query
reduce phase; otherwise, good suggestions whose rare hits come from shards
whose results do not arrive first may be incorrectly marked as not matching
the collate query.
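A hypothetical sketch of the merge rule (the real code merges suggestion options in the reduce phase): a suggestion matches the collate query if any shard saw it match.

```java
class CollateMergeSketch {
    // OR-merge: a match reported by any shard wins over a non-match.
    static boolean mergeCollateMatch(Boolean left, Boolean right) {
        return (left != null && left) || (right != null && right);
    }
}
```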
At the end of recovery, we mark the recovering shard as "in sync" on the primary. From this point on
the primary will treat any replication failure on it as critical and will reach out to the master to fail the
shard. To do so, we wait for the local checkpoint of the recovered shard to be above the global
checkpoint (in order to maintain the global checkpoint invariant).
If the master decides to cancel the allocation of the recovering shard while we wait, the method can
currently hang and fail to return. It will also ignore the interrupts that are triggered by the cancelled
recovery due to the primary closing.
Note that this is crucial as this method is called while holding a primary permit. Since the method
never comes back, the permit is never released. The unreleased permit will then block any primary
relocation *and* while the primary is trying to relocate all indexing will be blocked for 30m as it
waits to acquire the missing permit.
The code in `SourceRecoveryHandler` runs under a `CancellableThreads` instance in order to allow long-running operations to be interrupted when the recovery is cancelled. Sadly, if this happens at just the wrong moment while acquiring a permit from the primary, that permit can be leaked and never freed.
Note that this is slightly better than it sounds - we only cancel recoveries on the source side if the primary shard itself is closed.
Relates to https://github.com/elastic/elasticsearch/pull/30316
This adds a new `_ignored` meta field which indexes and stores fields that have
been ignored at index time because of the `ignore_malformed` option. It makes
malformed documents easier to identify by using `exists` or `term(s)` queries
on the `_ignored` field.
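For example, finding every document that had at least one malformed field ignored (a sketch using the Java query builders):

```java
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

class IgnoredFieldSketch {
    // Match documents that had at least one field ignored at index time.
    static SearchSourceBuilder docsWithIgnoredFields() {
        return new SearchSourceBuilder().query(QueryBuilders.existsQuery("_ignored"));
    }
}
```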
Closes #29494