This commit renames methods in the PersistentTasksService to
make it obvious that the methods send requests in order to change
the state of persistent tasks.
Relates to #29608.
* Make sure all instance variables are final.
* Make generateKey a private static method, instead of protected.
* Rename formatter -> format for consistency.
* Serialize bucket keys as strings as opposed to optional strings.
* Pull the stream serialization logic for buckets into the Bucket class.
Currently AbstractHttpServerTransport is in the netty4 module. This is
the incorrect location. This commit moves it out of the netty4 module.
Additionally, it moves unit tests that test AbstractHttpServerTransport
logic to the server module.
We failed to register "aliases" and "version" in the list of keywords
in the IndexTemplateMetaData, and therefore failed to parse the
following index template.
```
{
  "aliases": {"log": {}},
  "index_patterns": ["pattern-1"]
}
```
This commit registers those missing keywords.
The stored scripts API today accepts malformed requests instead of throwing an exception.
This PR deprecates accepting malformed put stored script requests (requests not using the official script format).
Relates to #27612
This change replaces some existing try-finally statements that close resources
in their finally block with the slightly shorter and safer try-with-resources
pattern.
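To illustrate the shape of the change, here is a generic JDK example of the pattern (not the actual Elasticsearch code, which applies the same transformation to its own resources):
```
import java.io.FileInputStream;
import java.io.IOException;

public class TryWithResourcesExample {

    // Before: try-finally. An exception thrown by close() can mask the
    // original exception thrown from the try block.
    static int readFirstByteOld(String path) throws IOException {
        FileInputStream in = new FileInputStream(path);
        try {
            return in.read();
        } finally {
            in.close();
        }
    }

    // After: try-with-resources. The stream is closed automatically, and an
    // exception from close() is recorded as a suppressed exception instead
    // of replacing the original one.
    static int readFirstByteNew(String path) throws IOException {
        try (FileInputStream in = new FileInputStream(path)) {
            return in.read();
        }
    }
}
```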
This commit removes the method AllocatedPersistentTask.getState() that
exposes the internal state of an AllocatedPersistentTask and replaces
it with a new isCompleted() method. Related to #29608.
Include size of snapshot in snapshot metadata
Adds the difference in number of files (and file sizes) between the previous and current snapshot. The total number/size reflects the total number/size of files in the snapshot.
Closes #18543
The BWC version was previously at 7.0, because the 6.x backport had not
yet landed. Now that it has landed, this commit replaces the BWC compat
with the real version, 6.4.0.
Relates #30762
We sign our official plugins yet this is not well-advertised and not at
all consumed during plugin installation. For plugins that are installed
over the intertubes, verifying that the downloaded artifact is signed by
our signing key would establish both integrity and validity of the
downloaded artifact. The chain of trust here is simple: our installable
artifacts (archive and package distributions) are signed, so if a user
trusts our packages via their signatures, and our plugin installer (which
would be executing trusted code) verifies the downloaded plugin, then the
user can trust the downloaded plugin too. This commit adds verification of
official plugins downloaded during installation. We do not add
verification for offline plugin installs; a user can download our
signatures and verify the artifacts themselves.
This commit also needs to solve a few interesting challenges. One of
these is that we want the bouncy castle JARs on the classpath only for
the plugin installer, but not for the runtime
Elasticsearch. Additionally, we want these JARs to not be present for
the JAR hell checks. To address this, we shift these JARs into a
sub-directory of lib (lib/tools/plugin-cli) that is only loaded for the
plugin installer, and in the plugin installer we filter any JARs in this
directory from the JAR hell check.
The writeTo method of VerifyRepositoryResponse incorrectly used its
local version to determine what it was receiving, rather than the
sender's version. This fixes a bug that occasionally happened when a 6.4
master node sent data to a 7.0 client, causing the number of bytes to be
improperly read. This also unmutes the test.
Closes #30807
Currently nio and netty modules use the CompletableFuture class for
managing listeners. This is unfortunate as that class accepts
Throwable. This commit adds a class CompletableContext that wraps
the CompletableFuture but does not accept Throwable. This allows the
modification of netty and nio logic to no longer handle Throwable.
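A minimal sketch of that wrapper idea, assuming a listener-style API (the method names here are illustrative, not necessarily those of the real class):
```
import java.util.concurrent.CompletableFuture;
import java.util.function.BiConsumer;

// Sketch: delegate to CompletableFuture but narrow the failure type from
// Throwable to Exception, so callers never have to handle Throwable.
public class CompletableContext<T> {
    private final CompletableFuture<T> delegate = new CompletableFuture<>();

    public void addListener(BiConsumer<T, ? super Exception> listener) {
        delegate.whenComplete((value, throwable) -> {
            if (throwable == null || throwable instanceof Exception) {
                listener.accept(value, (Exception) throwable);
            } else {
                // Errors are considered fatal; rethrow instead of notifying listeners.
                throw (Error) throwable;
            }
        });
    }

    public boolean complete(T value) {
        return delegate.complete(value);
    }

    public boolean completeExceptionally(Exception e) {
        return delegate.completeExceptionally(e);
    }
}
```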
Treats geohashes as grid cells instead of just points when the
geohashes are used to specify the edges in the geo_bounding_box
query. For example, if a geohash is used to specify the top_left
corner, the top left corner of the geohash cell will be used as the
corner of the bounding box.
Closes #25154
This commit reworks the way our realms perform caching in order to
limit each principal to a single ongoing authentication per realm. In
other words, this means that multiple requests made by the same user
will not trigger more than one authentication attempt at a time if no
entry has been stored in the cache. If an entry is present in our
cache, there is no restriction on the number of concurrent
authentications performed for this user.
This change enables us to limit the load we place on an external system
like an LDAP server and also preserve resources such as CPU on
expensive operations such as BCrypt authentication.
Closes #30355
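As a rough sketch of the single-in-flight idea (plain JDK types and illustrative names, not the actual realm code):
```
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Concurrent requests for the same principal race to install a shared
// future; only the winner performs the expensive check, everyone else
// waits on its result.
public class SingleFlightAuthCache {
    private final ConcurrentMap<String, CompletableFuture<Boolean>> cache =
            new ConcurrentHashMap<>();

    public boolean authenticate(String principal, String password) {
        CompletableFuture<Boolean> future = new CompletableFuture<>();
        CompletableFuture<Boolean> existing = cache.putIfAbsent(principal, future);
        if (existing != null) {
            // An authentication for this principal is (or was) in flight:
            // share its result instead of starting another attempt.
            return existing.join();
        }
        boolean result;
        try {
            result = expensiveCheck(principal, password); // e.g. BCrypt or an LDAP bind
        } catch (RuntimeException e) {
            cache.remove(principal, future); // do not cache errors
            future.completeExceptionally(e);
            throw e;
        }
        future.complete(result);
        if (result == false) {
            cache.remove(principal, future); // only successful auths stay cached
        }
        return result;
    }

    private boolean expensiveCheck(String principal, String password) {
        return "secret".equals(password); // stand-in for the real verification
    }
}
```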
We now have a remote cluster client exposed which can
talk to a given remote cluster and manages reconnects etc.
This makes code more readable than using the transport layer directly.
Persistent tasks was moved from X-Pack to core in #28455.
However, registration of the named writables and named
X-content was left in X-Pack.
This change moves the registration of the named writables
and named X-content into core. Additionally, the persistent
task actions are no longer registered in the X-Pack client
plugin, as they are already registered in ActionModule.
Today, the `ClusterApplier` and `MasterService` both use the
`ClusterStateTaskListener` interface to notify their callers when asynchronous
activities have completed. However, this is not wholly appropriate: none of the
callers into the `ClusterApplier` care about the `ClusterState` arguments that
they receive. This change introduces a dedicated ClusterApplyListener
interface for callers into the `ClusterApplier`, to distinguish these listeners
from the real `ClusterStateTaskListener`s that are waiting for responses from
the `MasterService`.
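Based on the description above, the dedicated interface presumably has roughly this shape (a sketch with illustrative method names, omitting the `ClusterState` arguments that callers do not need):
```
// Sketch: callbacks for cluster state application that, unlike
// ClusterStateTaskListener, carry no ClusterState argument.
public interface ClusterApplyListener {
    void onSuccess(String source);

    void onFailure(String source, Exception e);
}
```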
This change adds a simple header to the transport client
that is present in the server's thread context, ensuring
we can detect if a transport client talked to the server in a
specific request. This change also adds a header for xpack
to detect if the client has xpack installed.
This commit reintroduces 31251c9 and 63a5799. These commits introduced a
memory leak and were reverted. This commit brings those commits back
and fixes the memory leak by removing unnecessary retain method calls.
This reverts commit 31251c9 introduced in #30695.
We suspect this commit is causing the OOME's reported in #30811 and we will use this PR to test this assertion.
Since its introduction in ES 1.4, node fault detection has been using the wrong cluster state version to send
as part of the ping request, by using always the constant -1 (ClusterState.UNKNOWN_VERSION). This can, in an
unfortunate series of events, lead to a situation where a previous stale master can regain its authority and
revert the cluster to an older state.
This commit makes NodesFaultDetection use the correct current cluster state for sending ping requests, avoiding
the situation where a stale master possibly forces a newer master to step down and rejoin the stale one.
With #30672, acking expects *all* nodes to successfully apply the cluster state.
The testElectMasterWithLatestVersion test was checking for an ack while isolating
one node in the test.
Relates to #30672
This commit adds the ability to configure how a docvalue field should be
formatted, so that it is possible, e.g., to return a date field
formatted as the number of milliseconds since the epoch.
Closes #27740
The mutate function in UpdateSettingsRequestStreamableTests did not
guarantee that the masterNodeTimeout and timeout values are definitely
changed and occasionally the randomTimeValue() method would select the
same time value as the original request, which caused a failure.
Enables a rolling restart from the OSS distribution to the x-pack based distribution by preventing
x-pack code from installing custom metadata into the cluster state until all nodes are capable of
deserializing this metadata.
When doing a node restart using the test framework, the restarted node does not only use the
settings provided to the original node, but also additional settings provided by plugin extensions,
which does not correspond to the settings that a node would have on a true restart.
The cluster state acking mechanism currently incorrectly acks cluster state updates that have not
successfully been applied on all nodes. In a situation, for example, where some of the nodes
disconnect during publishing, and don't acknowledge receiving the new cluster state, the user-facing
action (e.g. create index request) will still consider this as an ack.
This is related to #29500. We are removing the ability to disable http
pipelining. This PR removes the references to disabling pipelining in
the integration test case.
The VerifyRepositoryResponse class holds a DiscoveryNode[], but the
nodes themselves are not serialized to a REST API consumer. Since we do
not want to put all of a DiscoveryNode over the wire, be it REST or
Transport, since it is unused, this change introduces a BWC-compatible
change in the ser/deser of the Response. Anything 6.4 and above will
read/write a NodeView, and anything prior will read/write a
DiscoveryNode. Further changes to 7.0 will be introduced to remove the
BWC shim and only read/write NodeView, and hold a List<NodeView> as the
VerifyRepositoryResponse internal state.
This is code that was leftover from the move to one shard by
default. Here in index metadata we were preserving the default number of
shards settings independently of the area of code where we set this
value on an index that does not explicitly have a number of shards
setting. This took into consideration the es.index.max_number_of_shards
system property, and was used in search requests to set the default
maximum number of concurrent shard requests. We set the default there
based on the default number of shards so that in a one-node case a
search request could concurrently hit all shards on an index with the
defaults. Now that we default to one shard, we expect fewer shards in
clusters and this adjustment of the node count as the max number of
concurrent shard requests is no longer needed. This commit then changes
the default number of shards settings to be consistent with the value
used when an index is created, and removes the now unneeded adjustment
in search requests.
The new snapshot includes LUCENE-8324, which fixes a missing checkpoint
after a fully deleted segment is dropped on flush. This snapshot should
resolve the failed tests in the CorruptedFileIT suite.
Closes #30741
Closes #30577
This is related to #29500 and #28898. This commit removes the ability
to disable http pipelining. After this commit, any elasticsearch node
will support pipelined requests from a client. Additionally, it extracts
some of the http pipelining work to the server module. This extracted
work is used to implement pipelining for the nio plugin.
We added this limit because we occasionally saw cases where most of the memory
usage of the cache was spent on the keys (ie. queries) rather than the values,
which caused the cache to vastly underestimate its memory usage. In recent
releases, we disabled caching on heavy `terms` queries, which were the main
source of the problem, so putting more entries in the cache should be safer.
The test has an issue that manifests only super rarely. The test sets the publish
timeout to 0, then proceeds to block cluster state processing on a data node,
then deletes an index and recreates it, and finally removes the cluster state
processing block. Finally, it calls ensureGreen, which might now return before
the data node has fully applied the cluster state that removed and readded the
shard, due to the publish timeout of 0. This commit waits for the cluster state
to be fully processed on the data node before doing the search.
Closes #30718
This change makes sure that an empty completion input does not throw an IAE when indexing.
Instead the input is ignored and the completion field is added in the list of ignored fields
for the document.
Closes #23121
This is related to #27260. The elasticsearch-nio jar is supposed to be
a library, as opposed to a framework. Currently it internally logs certain
exceptions. This commit modifies it to not rely on logging. Instead
exception handlers are passed by the applications that use the jar.
This commit adds Delete Repository, the associated docs and tests for
the high level REST API client. It also cleans up a seemingly innocuous
line in the RestDeleteRepositoryAction and some naming in SnapshotIT.
Relates #27205
The copy_settings parameter will be removed in Elasticsearch 8.0.0. This
commit adds an assertion with a message to clean up this code when master
is bumped to 8.0.0.
Added dedicated script contexts for:
* script function score
* script sorting
* terms_set query
Scripts for these contexts will either have a specific return value or
use scoring, and therefore will need their own scripting classes in the future.
Relates to #30511
The getDate() and getDates() methods existed prior to 5.x on long fields in
scripting. In 5.x, a new Date type for ScriptDocValues was added. The
getDate() and getDates() methods were left on long fields and added to date
fields to ease the transition. This commit removes those methods for
7.0.
Meta plugins existed only for a short time, in order to enable breaking
up x-pack into multiple plugins. However, now that x-pack is no longer
installed as a plugin, the need for them has disappeared. This commit
removes the meta plugins infrastructure.
I still do not like == false. However, I am so used to reading it that
today I read this line of code and could not understand how it could
possibly be doing the right thing. It was only when I finally noticed
the ! that the code made sense. This commit changes this code to be in
our style of == false. I still do not like == false.
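For readers unfamiliar with the house style, the two spellings in question look like this (a generic illustration, not the actual line from the commit):
```
public class StyleExample {
    // The negation operator is easy to miss when skimming:
    static String describeOld(boolean indexExists) {
        if (!indexExists) {
            return "missing";
        }
        return "present";
    }

    // House style: the explicit comparison is harder to misread.
    static String describeNew(boolean indexExists) {
        if (indexExists == false) {
            return "missing";
        }
        return "present";
    }
}
```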
Get Settings API changes have now been backported to version 6.4, and
therefore the latest version must send and expect the extra fields when
communicating with 6.4+ code.
Relates #29229, #30494
Currently in a rescore request, if window_size is smaller than
the top N documents returned (N=size), the explanation of scores could be incorrect
for documents that were part of the top N but not part of rescoring.
This PR corrects this by saving in RescoreContext the docIDs of documents
for which rescoring was applied, and adding a rescoring explanation
only for these docIDs.
Closes #28725
The camel case name `nGram` should be removed in favour of `ngram` and
similar for `edgeNGram` and `edge_ngram`. Before removal, we need to
deprecate the camel case names first. This change adds deprecation
warnings for indices with versions 6.4.0 and higher, and logs these
deprecation warnings.
Since #30143, the Cluster State API should always return the current
cluster_uuid in the response body, regardless of the metrics filters.
This is not exactly true, as it is returned only if metadata metrics and
no specific indices are requested.
This commit fixes the behavior to always return the cluster_uuid and
adds a new test.
This test failed but the cause is not obvious. This commit adds more
debug logging traces so that if it reproduces we can gather more
information.
Related #30577
Date histograms on non-fixed timezones such as `Europe/Paris` proved much slower
than histograms on fixed timezones in #28727. This change mitigates the issue by
using a fixed time zone instead when shard data doesn't cross a transition so
that all timestamps share the same fixed offset. This should be a common case
with daily indices.
NOTE: Rewriting the aggregation doesn't work since the timezone is then also
used on the coordinating node to create empty buckets, which might be out of the
range of data that exists on the shard.
NOTE: In order to be able to get a shard context in the tests, I reused code
from the base query test case by creating a new parent test case for both
queries and aggregations: `AbstractBuilderTestCase`.
Mitigates #28727
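A sketch of the rewrite condition using only java.time (ignoring edge effects around the bucket that contains a transition; the real implementation lives in the date histogram rewrite logic):
```
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.time.zone.ZoneOffsetTransition;
import java.time.zone.ZoneRules;

public class TimeZoneRewrite {
    // If no offset transition occurs between the shard's minimum and maximum
    // timestamp, every timestamp shares one offset, and the named zone can be
    // replaced by that fixed offset for rounding purposes.
    static ZoneId effectiveZone(ZoneId zone, Instant minTimestamp, Instant maxTimestamp) {
        ZoneRules rules = zone.getRules();
        ZoneOffsetTransition next = rules.nextTransition(minTimestamp);
        if (next == null || next.getInstant().isAfter(maxTimestamp)) {
            ZoneOffset fixed = rules.getOffset(minTimestamp);
            return fixed; // e.g. +01:00 instead of Europe/Paris
        }
        return zone; // data crosses a transition: keep the original zone
    }
}
```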
This pipeline aggregation gives the user the ability to script functions that "move" across a window
of data, instead of single data points. It is the scripted version of the MovingAvg pipeline agg.
Through custom script contexts, we expose a number of convenience methods:
- MovingFunctions.max()
- MovingFunctions.min()
- MovingFunctions.sum()
- MovingFunctions.unweightedAvg()
- MovingFunctions.linearWeightedAvg()
- MovingFunctions.ewma()
- MovingFunctions.holt()
- MovingFunctions.holtWinters()
- MovingFunctions.stdDev()
The user can also define any arbitrary logic via their own scripting, or combine with the above methods.
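Conceptually, the aggregation slides a window across the bucket values and applies a function to each window. A simplified sketch of the unweighted-average case (window alignment details differ in the real aggregation):
```
public class MovingWindow {
    // For each position i, average the values in the window of up to
    // windowSize entries ending at i.
    static double[] unweightedAvg(double[] values, int windowSize) {
        double[] out = new double[values.length];
        for (int i = 0; i < values.length; i++) {
            int from = Math.max(0, i - windowSize + 1);
            double sum = 0;
            for (int j = from; j <= i; j++) {
                sum += values[j];
            }
            out[i] = sum / (i - from + 1);
        }
        return out;
    }
}
```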
The TemplateUpgradeService is a system service that allows for plugins
to register templates that need to be upgraded. These template upgrades
should always happen in a system context as they are not a user
initiated action. For security integrations, the lack of running this
in a system context could lead to unexpected failures. The changes in
this commit set an empty system context for the execution of the
template upgrades performed by this service.
Relates #30603
When processing a top-level sibling pipeline, we destructively sublist
the path by assigning back onto the same variable. But if aggs are
specified as follows:
A. Multi-bucket agg in the first entry of our internal list
B. Regular agg as the immediate child of the multi-bucket in A
C. Regular agg with the same name as B at the top level, listed as the
second entry in our internal list
D. Finally, a pipeline agg with the path down to B
We'll get a class cast exception. The first agg will sublist the path
from [A,B] to [B], and then when we loop around to check agg C,
the sublisted path [B] matches the name of C and it fails.
The fix is simple: we just need to store the sublist in a new object
so that the old path remains valid for the rest of the aggs in the loop.
Closes #30608
* Fixes IndicesOptionsTests to serialise correctly
Previous to this change `IndicesOptionsTests.testSerialisation()` would
select a completely random version for both the `StreamOutput` and the
`StreamInput`. This meant that the output could be selected as 7.0+
while the input was selected as <7.0, causing the stream to be written
in the new format and read in the old format (or vice versa). This
change splits the two cases into different test methods ensuring that
the Streams are at least on compatible versions even if they are on
different versions.
* Use same random version for input and output streams in
server/src/test/java/org/elasticsearch/action/support/IndicesOptionsTests.java
This change adds a `listTasks` method to the high level java
ClusterClient which allows listing running tasks through the
task management API.
Related to #27205
Allows the setting to be specified using proper array syntax, for example:
"cluster.routing.allocation.awareness.attributes": [ "foo", "bar", "baz" ]
Closes #30617
This commit adds Create Repository, the associated docs and tests
for the high level REST API client. A few small changes to the
PutRepository Request and Response went into the commit as well.
This commit is related to #28898. It adds an nio driven http server
transport. Currently it only supports basic http features. Cors,
pipelining, and read timeouts will need to be added in future PRs.
* Refactor IndicesOptions to not be byte-based
This refactors IndicesOptions to be enum/enummap based rather than using a byte
as a bitmap for each of the options. This is necessary because we'd like to add
additional options, but we ran out of bits.
Backwards compatibility is kept for earlier versions so the option serialization
does not change the options.
Relates sort of to #30188
When we split/shrink an index we open several IndexWriter instances,
causing file deletes to be pending on Windows. This subsequently fails
when we open an IW to bootstrap the index history due to pending deletes.
This change sidesteps the check since we know our history goes forward
in terms of files and segments.
Closes #30416
The order in which double values are added in java can give different results
for the sum, so we need to allow a certain delta in the test assertions. The
current value was still a bit too low, resulting in rare test failures. This
change increases the allowed margin of error by a factor of ten.
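For illustration, a tiny example of why the order matters (floating-point addition is not associative):
```
public class SumOrder {
    public static void main(String[] args) {
        double a = 1e16, b = -1e16, c = 1.0;
        System.out.println((a + b) + c); // 1.0
        System.out.println(a + (b + c)); // 0.0 -- same values, different order
    }
}
```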
In #28255 the implementation of the elasticsearch.keystore was changed
to no longer be built on top of a PKCS#12 keystore. A side effect of
that change was that calling getString or getFile on a closed
KeyStoreWrapper ceased to throw an exception, and would instead return
a value consisting of all 0 bytes.
This change restores the previous behaviour as closely as possible.
It is possible to retrieve the _keys_ from a closed keystore, but any
attempt to get or set the entries will throw an IllegalStateException.
Now that the change to deprecate copy settings and disallow it being
explicitly set to false is backported, this commit adjusts the BWC
versions in master.
Deprecate the use of empty templates. The bug fix allows empty
templates/scripts to be loaded on startup for upgrades/restarts,
but empty templates can no longer be created.
#30423 combined auto-expansion in the same cluster state update where nodes are removed. As
the auto-expansion step would run before deassociating the dead nodes from the routing table, the
auto-expansion would possibly remove replicas from live nodes instead of dead ones. This commit
reverses the order to ensure that when nodes leave the cluster that the auto-expand-replica
functionality only triggers after failing the shards on the removed nodes. This ensures that active
shards on other live nodes are not failed if the primary resided on a now dead node.
Instead, one of the replicas on the live nodes first gets promoted to primary, and the auto-
expansion (removing replicas) only triggers in a follow-up step (but still same cluster state update).
Relates to #30456 and follow-up of #30423
We currently have a separate endpoint for retrieving settings from all indices. We introduced such an endpoint when removing comma-separated feature parsing for GetIndicesAction. The RestGetAllSettingsAction duplicates the code to print out the response that we already have in GetSettingsResponse (since it became a ToXContentObject), and uses the get index API internally instead of the get settings API, but the response is the same. Hence we can fold get all settings and get settings into a single API, which is what this commit does.
This commit changes the default out-of-the-box configuration for the
number of shards from five to one. We think this will help address a
common problem of oversharding. For users with time-based indices that
need a different default, this can be managed with index templates. For
users with non-time-based indices that find they need to re-shard with
the split API in place they no longer need to resort only to
reindexing.
Since this has the impact of changing the default number of shards used
in REST tests, we want to ensure that we still have coverage for issues
that could arise from multiple shards. As such, we randomize (rarely)
the default number of shards in REST tests to two. This is managed via a
global index template. However, some tests check the templates that are
in the cluster state during the test. Since this template is randomly
there, we need a way for tests to skip adding the template used to set
the number of shards to two. For this we add the default_shards feature
skip. To avoid having to write our docs in a complicated way because
sometimes they might be behind one shard, and sometimes they might be
behind two shards, we apply the default_shards feature skip to all docs
tests. That is, these tests will always run with the default number of
shards (one).
The second set of assertions was accidentally using the count's
moving average for the error delta in the value's moving average
assertion. This fixes the typo, and unmutes the test.
Closes #29456
The following tokenizers were moved: classic, edge_ngram,
letter, lowercase, ngram, path_hierarchy, pattern, thai, uax_url_email and
whitespace.
Left the keyword tokenizer factory in the server module, because
normalizers directly depend on it. This should be addressed in a
follow-up change.
Relates to #23658
We want copying settings to be the default behavior. This commit
deprecates not copying settings, and disallows explicitly not copying
settings. This gives users a transition path to the future default
behavior.
These tests failed due to in flight operations on the primary shard.
Sadly, we don't have any clue on those ops. This commit unmutes
these tests and logs the acquirers when checking for ongoing ops.
1> [2018-05-02T23:10:32,145][INFO ][o.e.i.f.FlushIT ] Third
seal: Total shards: [2], failed: [true], reason: [[1] ongoing operations
on primary], detail: []
Relates #29392
The writeBlob method for FsBlobContainer already opens the file with StandardOpenOption.CREATE_NEW, so there's no need for an extra blobExists(blobName) check.
Fixes longitude validation in geo_polygon_query builder. The queries
with wrong longitude currently fail, but only later during polygon
construction, with a quite complicated error message.
Fixes #30488
The MasterService takes responsibility for timeouts of the AckListeners that it
creates, and the rest of the Discovery subsystem is unaware of these timeouts,
so there's no need for this to appear in the Discovery.AckListener interface.
Also fix a typo in the name of DelegatingAckListener.
This commit removes a test that we can not restore from 1.x and 2.x
repository files. This test is not needed, the version of Elasticsearch
that this commit targets can not even read index files from those
versions.
This commit avoids deadlocks in the cache by removing dangerous places
where we try to take the LRU lock while completing a future. Instead, we
block for the future to complete, and then execute the handling code
under the LRU lock (for example, eviction).
Previously `BulkProcessor` retry logic was based on the exception type of the failed response (`EsRejectedExecutionException`). This commit changes it to be based on the returned status code. This allows us to reproduce the same retry behaviour when the `BulkProcessor` is used from the high-level REST client, which was previously not the case as we cannot rebuild the same exception type when parsing back the response. This change has no effect on the transport client.
Closes #28885
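The new retry criterion presumably boils down to a status-code check; a minimal sketch (429 is the status that `EsRejectedExecutionException` maps to):
```
// Sketch: decide whether a bulk item failure is retryable from its HTTP
// status code rather than from a parsed exception type.
public class RetryPolicy {
    static boolean shouldRetry(int statusCode) {
        return statusCode == 429; // TOO_MANY_REQUESTS
    }
}
```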
This commit adds the Snapshot Client with a first API call within it,
the get repositories call in snapshot/restore module. This also creates
a snapshot namespace for the docs, as well as get repositories docs.
Relates #27205
Today we can execute cluster API actions on only master, data or ingest nodes
using the `master:true`, `data:true` and `ingest:true` filters, but it is not
so easy to select coordinating-only nodes (i.e. those nodes that are neither
master nor data nor ingest nodes). This change fixes this by adding support for
a `coordinating_only` filter such that `coordinating_only:true` adds all
coordinating-only nodes to the set of selected nodes, and
`coordinating_only:false` deletes them.
Resolves #28831.
Fixes an edge case when using `more_like_this` where TermVectorsWriter
could throw an NPE when a field produced zero tokens after analysis. This
changes the implementation to use an empty list of tokens in this case.
Closes #30148
Auto-expands replicas in the same cluster state update (instead of a follow-up reroute) where nodes are added or removed.
Closes #1873, fixing an issue where nodes drop their copy of auto-expanded data when coming up, only to sync it again later.
Adds verification that geohashes are not empty and contain only
valid characters. It fixes the issue where an empty geohash is
treated as [-180, -90] and geohashes with non-geohash characters
are getting resolved into invalid coordinates.
Closes #23579
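A sketch of the validation, assuming the standard geohash base-32 alphabet:
```
public class GeohashValidation {
    // The 32 characters of the geohash alphabet (base-32, no "a", "i", "l", "o").
    private static final String ALPHABET = "0123456789bcdefghjkmnpqrstuvwxyz";

    // Reject empty geohashes and any character outside the alphabet,
    // instead of silently decoding to bogus coordinates.
    static void validate(String geohash) {
        if (geohash == null || geohash.isEmpty()) {
            throw new IllegalArgumentException("invalid geohash: empty");
        }
        for (char c : geohash.toCharArray()) {
            if (ALPHABET.indexOf(Character.toLowerCase(c)) < 0) {
                throw new IllegalArgumentException(
                        "invalid geohash character [" + c + "] in [" + geohash + "]");
            }
        }
    }
}
```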
When deleting or creating a snapshot for a given shard, elasticsearch
usually starts by listing all the existing snapshotted files in the repository.
Then it computes a diff and deletes the snapshotted files that are not
needed anymore. During this deletion, an exception is thrown if the file
to be deleted does not exist anymore.
This behavior is challenging with cloud based repository implementations
like S3, where a file that has been deleted can still appear in the bucket for
a few seconds/minutes (because the deletion can take some time to be fully
replicated on S3). If the deleted file appears in the listing of files, then the
following deletion will fail with a NoSuchFileException and the snapshot
will be partially created/deleted.
This pull request makes the deletion of these files a bit less strict, i.e. not
failing if the file we want to delete does not exist anymore. It introduces a
new BlobContainer.deleteIgnoringIfNotExists() method that can be used
at some specific places where not failing when deleting a file is
considered harmless.
Closes #28322
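For a filesystem-backed container, the contract presumably looks like this sketch (plain java.nio, not the actual implementation):
```
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

public class LenientDelete {
    // Deleting a blob that has already disappeared is treated as success.
    static void deleteIgnoringIfNotExists(Path blob) throws IOException {
        try {
            Files.delete(blob);
        } catch (NoSuchFileException e) {
            // The file was already gone, e.g. an eventually-consistent
            // listing (S3) reported a blob that no longer exists.
        }
    }
}
```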
The test indexes new documents and is thus correct in testing that the response result
is `CREATED`. Sadly we can't guarantee exactly once delivery just yet.
Relates #9967
Closes #21658
Today when processing a request for a URL path for which we can not find
a handler we send back a plain-text response. Yet, we have the accept
header in our hand and can respect the accepted media type of the
request. This commit addresses this.
Changes how data is read from CipherInputStream
Instead of using `read()` and checking that the bytes read are what we
expect, use `readFully()`, which will read exactly the number of bytes
requested, reading until the end of the stream or throwing an
`EOFException` if not all bytes can be read.
This approach keeps the simplicity of using CipherInputStream while
working as expected with both JCE and BCFIPS Security Providers
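A sketch of the pattern (JDK classes only; the actual change applies this inside the keystore reading code):
```
import java.io.DataInputStream;
import java.io.IOException;
import javax.crypto.CipherInputStream;

public class ReadFullyExample {
    // A single read() on a CipherInputStream may return fewer bytes than
    // requested (especially with BCFIPS), so wrap it in a DataInputStream
    // and use readFully, which loops until the buffer is filled or throws
    // EOFException.
    static byte[] decrypt(CipherInputStream cis, int expectedLength) throws IOException {
        byte[] buffer = new byte[expectedLength];
        try (DataInputStream dis = new DataInputStream(cis)) {
            dis.readFully(buffer); // reads exactly expectedLength bytes or throws
        }
        return buffer;
    }
}
```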
This PR adds support for the Get Settings API to the java high-level rest client.
Furthermore, logic related to the retrieval of default settings has been moved from the rest layer into the transport layer, and now default settings may be retrieved consistently via both the rest API and the transport API.
Upgrade to lucene-7.4.0-snapshot-1ed95c097b
This version contains:
* An Analyzer for Korean
* An IntervalQuery and IntervalsSource that retrieve minimum intervals of positional queries.
* A new API to retrieve matches (offsets and positions) of a query for a single document.
* Support for soft deletes in the index writer.
* A fixed shingle filter that handles index time synonyms.
* Support for emoji sequence in ICUTokenizer (with an upgrade to icu 61.1)
We were recently looking at bugs that can only occur if two different documents were indexed concurrently. For example, what happens if the local checkpoint advances above the sequence number of a document that's being indexed? That can only happen if another concurrent operation caused the checkpoint to advance. It has to be another document to allow concurrency, as we acquire a per-uid lock. While our investigation proved that the suspected bug doesn't exist, we still discovered that our unit testing coverage is not good enough to cover this case.
This PR extends the test for concurrent out-of-order replica processing to use two documents in its history.
The Get Repositories response object held a list of RepositoryMetaData
entries. This object does not have the from/toXContent methods that are
needed to expose this to the high level REST client. The
RepositoriesMetaData, however, does, and it also contains a list of
RepositoryMetaData objects within it. So rather than duplicate this
logic or move it (RepositoriesMetaData is a fragment object used by
cluster state), the object holding state in the Response was changed to
use the RepositoriesMetaData instead. This also cleans up the read/write
methods in the response, as they can now use the same read/write in
RepositoriesMetaData, which also were not present in the singular class.
Fix NPE when CumulativeSum agg encounters null/empty bucket
If the cusum agg encounters a null value, it's because the value is
missing (like the first value from a derivative agg), the path is
not valid, or the bucket in the path was empty.
Previously cusum would just explode on the null, but this changes it
so we only increment the sum if the value is non-null and finite.
This is safe because even if the cusum encounters all null or empty
buckets, the cumulative sum is still zero (like how the sum agg returns
zero even if all the docs were missing values)
I went ahead and tweaked AggregatorTestCase to allow testing pipelines,
so that I could delete the IT test and reimplement it as AggTests.
Closes #27544
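The essence of the fix is a guard like the following sketch (simplified from the aggregator, illustrative names):
```
public class CumulativeSum {
    // Running sum over bucket values; null or non-finite buckets
    // contribute nothing instead of blowing up.
    static double[] cumulativeSums(Double[] bucketValues) {
        double[] sums = new double[bucketValues.length];
        double sum = 0;
        for (int i = 0; i < bucketValues.length; i++) {
            Double value = bucketValues[i];
            if (value != null && Double.isFinite(value)) {
                sum += value;
            }
            sums[i] = sum;
        }
        return sums;
    }
}
```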
This commit removes the http.enabled setting. While all real nodes (started with bin/elasticsearch) will always have an http binding, there are many tests that rely on the quickness of not actually needing to bind to 2 ports. For this case, the MockHttpTransport.TestPlugin provides a dummy http transport implementation which is used by default in ESIntegTestCase.
closes #12792
Suggester Options have a collate match field that is returned when the prune
option is set to true. These values should be merged together in the query
reduce phase, otherwise good suggestions that result in rare hits in shards with
results that do not arrive first may be incorrectly marked as not matching the
collate query.
At the end of recovery, we mark the recovering shard as "in sync" on the primary. From this point on
the primary will treat any replication failure on it as critical and will reach out to the master to fail the
shard. To do so, we wait for the local checkpoint of the recovered shard to be above the global
checkpoint (in order to maintain global checkpoint invariant).
If the master decides to cancel the allocation of the recovering shard while we wait, the method can
currently hang and fail to return. It will also ignore the interrupts that are triggered by the cancelled
recovery due to the primary closing.
Note that this is crucial as this method is called while holding a primary permit. Since the method
never comes back, the permit is never released. The unreleased permit will then block any primary
relocation *and* while the primary is trying to relocate all indexing will be blocked for 30m as it
waits to acquire the missing permit.
The code in `SourceRecoveryHandler` runs under a `CancellableThreads` instance in order to allow long running operations to be interrupted when the recovery is cancelled. Sadly if this happens at just the wrong moment while acquiring a permit from the primary, that permit can be leaked and never be freed.
Note that this is slightly better than it sounds - we only cancel recoveries on the source side if the primary shard itself is closed.
Relates to https://github.com/elastic/elasticsearch/pull/30316
This adds a new `_ignored` meta field which indexes and stores fields that have
been ignored at index time because of the `ignore_malformed` option. It makes
malformed documents easier to identify by using `exists` or `term(s)` queries
on the `_ignored` field.
Closes #29494
* WIP commit to try calling rewrite on coordinating node during TransportSearchAction
* Use re-written query instead of using the original query
* fix incorrect/unused imports and wildcarding
* add error handling for cases where an exception is thrown
* correct exception handling such that integration tests pass successfully
* fix additional case covered by IndicesOptionsIntegrationIT.
* add integration test case that verifies queries are now valid
* add optional value for index
* address review comments: catch superclass of XContentParseException
fixes #29483
The variadic constructor was only used in a few places and the
RepositoriesMetaData class is backed by a List anyway, so just using a
List will make it simpler to instantiate it.
We still don't have a strong reason for the failures of
testDoNotRenewSyncedFlushWhenAllSealed and
testSyncedFlushSkipOutOfSyncReplicas.
This commit adds debug logging for these two tests.
Today when an index is created from shrinking or splitting an existing
index, the target index inherits almost none of the source index
settings. This is surprising and a hassle for operators managing such
indices. Given this is the default behavior, we can not simply change
it. Instead, we start by introducing the ability to copy settings. This
flag can be set on the REST API or on the transport layer and it has the
behavior that it copies all settings from the source except non-copyable
settings (a property of a setting introduced in this
change). Additionally, settings on the request will always override.
This change is the first step in our adventure:
- this flag is added here in 7.0.0 and immediately deprecated
- this flag will be backported to 6.4.0 and remain deprecated
- then, we will remove the ability to set this flag to false in 7.0.0
- finally, in 8.0.0 we will remove this flag and the only behavior will
be for settings to be copied
Just like `ElasticsearchException`, the inner-most
`XContentParseException` tends to contain the root cause of the
exception and should be shown to the user in the `root_cause` field.
This effectively undoes most of the changes that #29373 made to the
`root_cause` for parsing exceptions. The `type` field still changes from
`parse_exception` to `x_content_parse_exception`, but this seems like a
fairly safe change.
`ElasticsearchWrapperException` *looks* tempting to implement this but
the behavior isn't quite right. `ElasticsearchWrapperExceptions` are
entirely unwrapped until the cause no longer
`implements ElasticsearchWrapperException` but `XContentParseException`
should be unwrapped until its cause is no longer an
`XContentParseException` but no further. In other words,
`ElasticsearchWrapperException` are unwrapped one step too far.
Closes #30261
Remove double if depending on the Result value. It makes little sense to
pass in a boolean flag based on a Result value that we already have,
if that internally is represented again as a `Result` value.
Also changed the `Result` `lowercase` instance member to be computed
based on `name()` instead of `toString()`, which is safer, and to use
`Locale.ROOT` instead of `Locale.ENGLISH`.
Starting with the refactoring in https://github.com/elastic/elasticsearch/pull/22778 (released in 5.3) we may fail to properly replicate operations when a mapping update on the master fails. If a bulk
operation needs a mapping update half way through, it will send a request to the master before continuing
to index the operations. If that request times out or isn't acked (i.e., even one node in the cluster
didn't process it within 30s), we end up throwing the exception and aborting the entire bulk. This is
a problem because all operations that were processed so far are not replicated any more to the
replicas. Although these operations were never "acked" to the user (we threw an error), it causes the
local checkpoint on the replicas to lag (on 6.x) and the primary and replica to diverge.
This PR does a couple of things:
1) Most importantly, treat *any* mapping update failure as a document level failure, meaning only
the relevant indexing operation will fail.
2) Removes the mapping update callbacks from `IndexShard.applyIndexOperationOnPrimary` and
similar methods for simpler execution. We don't use exceptions any more when a mapping
update was successful.
I think we need to do more work here (the fact that a single slow node can prevent those mappings
updates from being acked and thus fail operations is bad), but I want to keep this as small as I can
(it is already too big).
Currently, the only way to get the REST response for the `/_cluster/state`
call to return the `cluster_uuid` is to request the `metadata` metrics,
which is one of the most expensive response structures. However, external
monitoring agents will likely want the `cluster_uuid` to correlate the
response with other API responses whether or not they want cluster
metadata.
Today when a resize operation is performed, we copy the analysis,
similarity, and sort settings from the source index. It is possible for
the resize request to include additional index settings including
analysis, similarity, and sort settings. We reject sort settings when
validating the request. However, we silently ignore analysis and
similarity settings on the request that are already set on the source
index. Since it is possible to change the analysis and similarity
settings on an existing index, this should be considered a bug and the
sort of leniency that we abhor. This commit addresses this bug by
allowing the request analysis/similarity settings to override the
existing analysis/similarity settings on the target.
The `testDeleteSnapshotWithMissingIndexAndShardMetadata` test uses an
obsolete repository directory structure based on index names instead of
UUIDs. Because it swallows exceptions when deleting test files the test
never failed when the directory structure changed.
This commit fixes the test to use the right directory structure and file
names and to not swallow exceptions anymore.
The REST resize handlers for shrink/split operations are effectively the
same code with a minor difference. This commit collapse these handlers
into a single base class.
This is a code-tidying PR, a little side adventure while working on
another change. Previously only shrink request existed but when the
ability to split indices was added, shrink and split were done together
under a single request object: the resize request object. However, the
code inherited the legacy name in the naming of some variables. This
commit cleans this up.
Since #28049, only fully initialized shards receive write requests.
This enhancement allows us to handle all exceptions. In #28571, we
started strictly handling shard-not-available exceptions and tried to
keep the way we report replication errors to users by only reporting if
the error is not a shard-not-available exception. However, since then we
unintentionally always log a warning for all exceptions. This change restores
the previous behavior, which logs a warning only if an exception is not a
shard-not-available exception.
Relates #28049
Relates #28571
A NullPointerException is thrown when trying to create or delete
a snapshot in a repository that has been written to by an older
Elasticsearch after writing to it with a newer Elasticsearch version.
This is because the way snapshots are formatted in the repository
snapshots index file changed in #24477.
This commit changes the parsing of the repository index file so that
it now detects a corrupted index file and fails the snapshot
operation early.
closes #29052
The global ordinals terms aggregator has an option to remap global ordinals to
dense ordinals that match the request. This mode is automatically picked when the terms
aggregator is a child of another bucket aggregator or when it needs to defer buckets to an
aggregation that is used in the ordering of the terms.
Though when building the final buckets, this aggregator loops over all possible global ordinals
rather than using the hash map that was built to remap the ordinals.
For fields with high cardinality this is highly inefficient and can lead to slow responses even
when the number of terms that match the query is low.
This change fixes this performance issue by using the hash table of matching ordinals to perform
the pruning of the final buckets for the terms and significant_terms aggregation.
I ran a simple benchmark with 1M documents containing 0 to 10 keywords randomly selected among 1M unique terms.
This field is used to perform a multi-level terms aggregation using rally to collect the response times.
The aggregation below is an example of a two-level terms aggregation that was used to perform the benchmark:
```
"aggregations":{
"1":{
"terms":{
"field":"keyword"
},
"aggregations":{
"2":{
"terms":{
"field":"keyword"
}
}
}
}
}
```
| Levels of aggregation | 50th percentile ms (master) | 50th percentile ms (patch) |
| --- | --- | --- |
| 2 | 640.41ms | 577.499ms |
| 3 | 2239.66ms | 600.154ms |
| 4 | 14141.2ms | 703.512ms |
Closes #30117
Clearing the cache indices can be done via GET and POST. As GET should
only support read only operations, this removes the support for using
GET for clearing the indices caches.
Today we update index settings directly via IndexService instead of the
cluster state in IndexServiceTests. However, those changes will be lost
if there is a cluster state update. In general, we should update index
settings via the client and limit direct usage to only special tests.
This commit replaces direct usages with the updateSettings API of the client.
Closes #24491
This commit propagates the preference and routing of the original SearchRequest in the ShardSearchRequest.
This information is then used to fix a bug in sliced scrolls when executed with a preference (or a routing).
Instead of computing the slice query from the total number of shards in the index, this commit computes this number from the number of shards per index that participate in the request.
Fixes #27550
Today we always add no-ops to the translog regardless of their origin, thus a
noop may appear in the translog multiple times. This is not a big deal,
as noops are small and rarely appear.
This commit ensures to add a noop to translog only if its origin is not
from local translog. This restriction has been applied for index and
delete.
This metric previously existed for backwards compatibility reasons
although the suggest stats were folded into search stats. This metric
was deprecated in 6.3.0 and this commit removes them for 7.0.0.
This commit fixes two issues with the byte size value equals/hash code
test.
The first problem is due to a test failure when the original instance is
zero bytes and we pick the mutation branch where we preserve the size
but change the unit. The mutation should result in a different byte size
value but changing the unit on zero bytes still leaves us with zero
bytes.
During the course of fixing this test I discovered another problem. When
we need to randomize size, we could randomly select a size that would
lead to an overflow of Long.MAX_VALUE.
This commit fixes both of these issues.
This commit adds the distribution type to the startup scripts so that we
can discern from log output and the main response the type of the
distribution (deb/rpm/tar/zip).
This commit adds the distribution flavor (default versus oss) to the
build process which is passed through the startup scripts to
Elasticsearch. This change will be used to customize the message on
attempting to install/remove x-pack based on the distribution flavor.
This commit makes x-pack a module and adds it to the default
distribution. It also creates distributions for zip, tar, deb and rpm
which contain only oss code.
Adds a check in BlobstoreRepository.snapshot(...) that prevents duplicate snapshot names and fails
the snapshot before writing out the new index file. This ensures that you cannot end up in this
situation where the index file has duplicate names and cannot be read anymore.
Relates to #28906
The suggest stats were folded into the search stats as part of the
indices stats API in 5.0.0. However, the suggest metric remained as a
synonym for the search metric for BWC reasons. This commit deprecates
usage of the suggest metric on the indices stats API.
Similarly, due to the changes to fold the suggest stats into the search
stats, requesting the suggest index metric on the indices metric on the
nodes stats API has produced an empty object as the response since
5.0.0. This commit deprecates this index metric on the indices metric on
the nodes stats API.
This commit implements the ability to remove values from a Cache using
the values iterator. This brings the values iterator in line with the
keys iterator and adds support for removing items in the cache that are
not easily found by the key used for the cache.
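To illustrate the contract with plain JDK collections (the commit implements the same semantics on Elasticsearch's own Cache class):
```
import java.util.Iterator;
import java.util.concurrent.ConcurrentHashMap;

public class ValuesIteratorRemoval {
    public static void main(String[] args) {
        // Remove entries by inspecting values, without knowing their keys.
        ConcurrentHashMap<String, Long> cache = new ConcurrentHashMap<>();
        cache.put("a", 1L);
        cache.put("b", 99L);
        Iterator<Long> it = cache.values().iterator();
        while (it.hasNext()) {
            if (it.next() > 10L) {
                it.remove(); // removes the whole entry, key included
            }
        }
        System.out.println(cache); // {a=1}
    }
}
```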
Previously we did not put an index operation into a version map if that map does
not require safe access, but we removed the existing delete tombstone only
if assertions were enabled. In #29585, we removed the side-effect caused by
assertions, and then this test started failing. This failure can be explained
as follows:
- Step 1: Index a doc then delete that doc
- Step 2: The version map can switch to unsafe mode because of
concurrent refreshes (implicitly called by flushes)
- Step 3: Index a document - the version map won't add this version
value and won't prune the tombstone (previously it did)
- Step 4: Delete a document - this will return NOT_FOUND instead of
DELETED because of the stale delete tombstone
This failure is actually fixed by #29619, in which we never leave stale
delete tombstones.
Closes #29626
Today the VersionMap does not clean up a stale delete tombstone if it
does not require safe access. However, in a very rare situation, due to
concurrent refreshes, the safe-access flag may be flipped over and then an
engine may accidentally consult that stale delete tombstone.
This commit ensures to never leave stale delete tombstones in a version
map by always pruning delete tombstones when putting a new index entry
regardless of the value of the safe-access flag.
This commit remove serializing of common stats flags via its enum
ordinal and uses an explicit index defined on the enum. This is to
enable us to remove an unused flag (Suggest) without ruining the
ordering and thus breaking serialization.
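A sketch of the pattern with illustrative flag names:
```
// Each flag carries an explicit, stable index used for (de)serialization,
// so removing a constant no longer shifts the meaning of the remaining
// ones the way ordinal-based encoding would.
public enum Flag {
    STORE(0),
    INDEXING(1),
    // a removed flag used to sit here; its index (2) is simply retired
    SEARCH(3);

    private final int index;

    Flag(int index) {
        this.index = index;
    }

    public int getIndex() {
        return index;
    }
}
```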
We removed catching Throwable from the code base, and left behind was a
comment about catching InternalError in MemoryManagementMXBean. We are
not going to catch InternalError here as we expect that to be
fatal. This commit removes that stale comment.
The name of the bulk thread pool was renamed to "write" with "bulk" as a
fallback name. This change was made in 6.x for BWC reasons yet in 7.0.0
we are removing this fallback. This commit removes this fallback for the
write thread pool.
Today when a version map does not require safe access, we will skip that
document. However, if assertions are enabled, we remove the delete
tombstone of that document if it exists. This side-effect may accidentally
hide bugs in which a stale delete tombstone can be accessed.
This change ensures that putAssertionMap does not modify the tombstone maps.
The camel case name `htmlStrip` should be removed in favour of `html_strip`, but
we need to deprecate it first. This change adds deprecation warnings for indices
with versions starting with 6.3.0 and logs deprecation warnings in these cases.
This commit renames the bulk thread pool to the write thread pool. This
is to better reflect the fact that the underlying thread pool is used to
execute any document write request (single-document index/delete/update
requests, and bulk requests).
With this change, we add support for fallback settings
thread_pool.bulk.* which will be supported until 7.0.0.
We also add a system property so that the display name of the thread
pool remains as "bulk" if needed to avoid breaking users.