This commit adds the ability to configure how a docvalue field should be
formatted, so that it would be possible, e.g., to return a date field
formatted as the number of milliseconds since Epoch.
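As an illustrative sketch, the search source builder might be used like this; the two-argument `docValueField` overload is an assumption based on this change:

```java
import org.elasticsearch.search.builder.SearchSourceBuilder;

public class DocValueFormatExample {
    public static SearchSourceBuilder withFormattedDate() {
        // assumed overload added by this change: ask for the "timestamp" field
        // from doc values, rendered as milliseconds since Epoch
        return new SearchSourceBuilder().docValueField("timestamp", "epoch_millis");
    }
}
```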
Closes #27740
The mutate function in UpdateSettingsRequestStreamableTests did not
guarantee that the masterNodeTimeout and timeout values were actually
changed; occasionally the randomTimeValue() method would select the
same time value as the original request, which caused a failure.
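The usual fix is to re-draw until the value differs; a self-contained sketch of what the `randomValueOtherThan` helper in `ESTestCase` provides:

```java
import java.util.function.Supplier;

class MutateHelper {
    // keep drawing random candidates until one differs from the original,
    // so the mutated request is guaranteed to actually change
    static <T> T randomValueOtherThan(T original, Supplier<T> generator) {
        T candidate;
        do {
            candidate = generator.get();
        } while (candidate.equals(original));
        return candidate;
    }
}
```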
Enables a rolling restart from the OSS distribution to the x-pack based distribution by preventing
x-pack code from installing custom metadata into the cluster state until all nodes are capable of
deserializing this metadata.
When doing a node restart using the test framework, the restarted node uses not only the
settings provided to the original node, but also additional settings provided by plugin extensions,
which does not correspond to the settings that a node would have on a true restart.
The cluster state acking mechanism currently incorrectly acks cluster state updates that have not
successfully been applied on all nodes. In a situation, for example, where some of the nodes
disconnect during publishing, and don't acknowledge receiving the new cluster state, the user-facing
action (e.g. create index request) will still consider this as an ack.
This is related to #29500. We are removing the ability to disable http
pipelining. This PR removes the references to disabling pipelining in
the integration test case.
The VerifyRepositoryResponse class holds a DiscoveryNode[], but the
nodes themselves are not serialized to a REST API consumer. Since we do
not want to put all of a DiscoveryNode over the wire, be it REST or
Transport, since it is unused, this change introduces a BWC-compatible
change in ser/deser of the Response. Anything 6.4 and above will
read/write a NodeView, and anything prior will read/write a
DiscoveryNode. Further changes to 7.0 will be introduced to remove the
BWC shim and only read/write NodeView, and hold a List<NodeView> as the
VerifyRepositoryResponse internal state.
This is code that was leftover from the move to one shard by
default. Here in index metadata we were preserving the default number of
shards settings independently of the area of code where we set this
value on an index that does not explicitly have a number of shards
setting. This took into consideration the es.index.max_number_of_shards
system property, and was used in search requests to set the default
maximum number of concurrent shard requests. We set the default there
based on the default number of shards so that in a one-node case a
search request could concurrently hit all shards on an index with the
defaults. Now that we default to one shard, we expect fewer shards in
clusters and this adjustment of the node count as the max number of
concurrent shard requests is no longer needed. This commit then changes
the default number of shards settings to be consistent with the value
used when an index is created, and removes the now unneeded adjustment
in search requests.
The new snapshot includes LUCENE-8324, which fixes a missing checkpoint
after a fully deleted segment is dropped on flush. This snapshot should
resolve the failed tests in the CorruptedFileIT suite.
Closes #30741, Closes #30577
This is related to #29500 and #28898. This commit removes the ability
to disable http pipelining. After this commit, any elasticsearch node
will support pipelined requests from a client. Additionally, it extracts
some of the http pipelining work to the server module. This extracted
work is used to implement pipelining for the nio plugin.
We added this limit because we occasionally saw cases where most of the memory
usage of the cache was spent on the keys (ie. queries) rather than the values,
which caused the cache to vastly underestimate its memory usage. In recent
releases, we disabled caching on heavy `terms` queries, which were the main
source of the problem, so putting more entries in the cache should be safer.
The test has an issue that manifests only very rarely. The test sets the publish
timeout to 0, then proceeds to block cluster state processing on a data node,
then deletes an index and recreates it, and finally removes the cluster state
processing block. It then calls ensureGreen, which might now return before
the data node has fully applied the cluster state that removed and readded the
shard, due to the publish timeout of 0. This commit waits for the cluster state
to be fully processed on the data node before doing the search.
Closes #30718
This change makes sure that an empty completion input does not throw an IAE when indexing.
Instead the input is ignored and the completion field is added in the list of ignored fields
for the document.
Closes #23121
This is related to #27260. The elasticsearch-nio jar is supposed to be
a library as opposed to a framework. Currently it internally logs certain
exceptions. This commit modifies it to not rely on logging. Instead
exception handlers are passed by the applications that use the jar.
This commit adds Delete Repository, the associated docs and tests for
the high level REST API client. It also cleans up a seemingly innocuous
line in the RestDeleteRepositoryAction and some naming in SnapshotIT.
Relates #27205
The copy_settings parameter will be removed in Elasticsearch 8.0.0. This
commit adds an assertion message as a reminder to clean up this code
when master is bumped to 8.0.0.
Added dedicated script contexts for:
* script function score
* script sorting
* terms_set query
Scripts for these contexts will either have a specific return value or
use scoring, and therefore will need their own scripting classes in the future.
Relates to #30511
The getDate() and getDates() existed prior to 5.x on long fields in
scripting. In 5.x, a new Date type for ScriptDocValues was added. The
getDate() and getDates() methods were left on long fields and added to date
fields to ease the transition. This commit removes those methods for
7.0.
Meta plugins existed only for a short time, in order to enable breaking
up x-pack into multiple plugins. However, now that x-pack is no longer
installed as a plugin, the need for them has disappeared. This commit
removes the meta plugins infrastructure.
I still do not like == false. However, I am so used to reading it that
today I read this line of code and could not understand how it could
possibly be doing the right thing. It was only when I finally noticed
the ! that the code made sense. This commit changes this code to be in
our style of == false. I still do not like == false.
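For illustration, the two spellings side by side:

```java
class StyleExample {
    static void check(boolean acknowledged) {
        // easy to misread: the negation is a single character at the front
        if (!acknowledged) {
            throw new IllegalStateException("not acknowledged");
        }
        // house style: the negation is spelled out where the eye lands
        if (acknowledged == false) {
            throw new IllegalStateException("not acknowledged");
        }
    }
}
```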
Get Settings API changes have now been backported to version 6.4, and
therefore the latest version must send and expect the extra fields when
communicating with 6.4+ code.
Relates #29229, #30494
Currently in a rescore request, if window_size is smaller than
the top N documents returned (N=size), the explanation of scores could be incorrect
for documents that were part of the top N but not part of rescoring.
This PR corrects this by saving in RescoreContext the docIDs of documents
for which rescoring was applied, and adding a rescoring explanation
only for these docIDs.
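A rough sketch of the bookkeeping, with hypothetical names:

```java
import java.util.HashSet;
import java.util.Set;

// hypothetical sketch: the rescore context remembers which docs were
// actually rescored, so explain() can tell them apart from the rest of top N
class RescoreContextSketch {
    private final Set<Integer> rescoredDocs = new HashSet<>();

    void markRescored(int docId) {
        rescoredDocs.add(docId);
    }

    // explain() consults this before attaching a rescore explanation
    boolean isRescored(int docId) {
        return rescoredDocs.contains(docId);
    }
}
```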
Closes #28725
The camel case name `nGram` should be removed in favour of `ngram`, and
similarly `edgeNGram` in favour of `edge_ngram`. Before removal, we need
to deprecate the camel case names first. This change logs deprecation
warnings for indices created with versions 6.4.0 and higher.
Since #30143, the Cluster State API should always return the current
cluster_uuid in the response body, regardless of the metrics filters.
This was not exactly true, as it was returned only if metadata metrics
and no specific indices were requested.
This commit fixes the behavior to always return the cluster_uuid and
adds a new test.
This test failed but the cause is not obvious. This commit adds more
debug logging traces so that if it reproduces we can gather more
information.
Related #30577
Date histograms on non-fixed timezones such as `Europe/Paris` proved much slower
than histograms on fixed timezones in #28727. This change mitigates the issue by
using a fixed time zone instead when shard data doesn't cross a transition so
that all timestamps share the same fixed offset. This should be a common case
with daily indices.
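A sketch of the idea in `java.time` terms (the real code predates the java.time migration; names here are illustrative):

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.zone.ZoneOffsetTransition;

class TimeZoneRewriteSketch {
    // if no DST transition falls inside the shard's [min, max] data range,
    // the named zone can be replaced by the fixed offset in force at `min`
    static ZoneId maybeFixedZone(ZoneId tz, Instant min, Instant max) {
        ZoneOffsetTransition next = tz.getRules().nextTransition(min);
        if (next == null || next.getInstant().isAfter(max)) {
            return tz.getRules().getOffset(min); // a ZoneOffset is itself a ZoneId
        }
        return tz; // data crosses a transition: keep the full rules
    }
}
```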
NOTE: Rewriting the aggregation doesn't work since the timezone is then also
used on the coordinating node to create empty buckets, which might be out of the
range of data that exists on the shard.
NOTE: In order to be able to get a shard context in the tests, I reused code
from the base query test case by creating a new parent test case for both
queries and aggregations: `AbstractBuilderTestCase`.
Mitigates #28727
This pipeline aggregation gives the user the ability to script functions that "move" across a window
of data, instead of single data points. It is the scripted version of MovingAvg pipeline agg.
Through custom script contexts, we expose a number of convenience methods:
- MovingFunctions.max()
- MovingFunctions.min()
- MovingFunctions.sum()
- MovingFunctions.unweightedAvg()
- MovingFunctions.linearWeightedAvg()
- MovingFunctions.ewma()
- MovingFunctions.holt()
- MovingFunctions.holtWinters()
- MovingFunctions.stdDev()
The user can also define any arbitrary logic via their own scripting, or combine with the above methods.
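For intuition, a plain-Java sketch of what one of these window functions computes; in a real aggregation the script would call `MovingFunctions.unweightedAvg(values)` instead:

```java
class MovingFunctionsSketch {
    // unweightedAvg: arithmetic mean of the values currently in the window;
    // NaN for an empty window is an assumption for the "no data yet" case
    static double unweightedAvg(double[] window) {
        if (window.length == 0) {
            return Double.NaN;
        }
        double sum = 0;
        for (double v : window) {
            sum += v;
        }
        return sum / window.length;
    }
}
```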
The TemplateUpgradeService is a system service that allows for plugins
to register templates that need to be upgraded. These template upgrades
should always happen in a system context as they are not a user
initiated action. For security integrations, the lack of running this
in a system context could lead to unexpected failures. The changes in
this commit set an empty system context for the execution of the
template upgrades performed by this service.
Relates #30603
When processing a top-level sibling pipeline, we destructively sublist
the path by assigning back onto the same variable. But if aggs are
specified as follows:
A. Multi-bucket agg in the first entry of our internal list
B. Regular agg as the immediate child of the multi-bucket in A
C. Regular agg with the same name as B at the top level, listed as the
second entry in our internal list
D. Finally, a pipeline agg with the path down to B
we'll get a ClassCastException. The first agg will sublist the path
from [A,B] to [B], and then when we loop around to check agg C,
the sublisted path [B] matches the name of C and it fails.
The fix is simple: we just need to store the sublist in a new object
so that the old path remains valid for the rest of the aggs in the loop.
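The gist of the fix, sketched: keep the sublist in a fresh variable so the shared path is not clobbered:

```java
import java.util.List;

class PathSketch {
    static List<String> stripAggName(List<String> path) {
        // subList returns a view; returning it into a new variable (instead of
        // assigning back onto `path`) leaves the original path intact for
        // the remaining aggs in the loop
        return path.subList(1, path.size());
    }
}
```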
Closes #30608
* Fixes IndicesOptionsTests to serialise correctly
Previous to this change `IndicesOptionsTests.testSerialisation()` would
select a complete random version for both the `StreamOutput` and the
`StreamInput`. This meant that the output could be selected as 7.0+
while the input was selected as <7.0, causing the stream to be written
in the new format and read in the old format (or vice versa). This
change splits the two cases into different test methods, ensuring that
the streams are at least on compatible versions even if they are on
different versions.
* Use same random version for input and output streams
server/src/test/java/org/elasticsearch/action/support/IndicesOptionsTests.java
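A sketch of the fix, assuming the usual test utilities (`VersionUtils.randomVersion`, `setVersion` on both stream types):

```java
import org.elasticsearch.Version;
import org.elasticsearch.action.support.IndicesOptions;
import org.elasticsearch.common.io.stream.BytesStreamOutput;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.test.VersionUtils;

import java.io.IOException;
import java.util.Random;

class RoundTripSketch {
    // serialise and deserialise with one and the same randomly chosen version
    static IndicesOptions roundTrip(IndicesOptions options, Random random) throws IOException {
        Version version = VersionUtils.randomVersion(random);
        BytesStreamOutput output = new BytesStreamOutput();
        output.setVersion(version);   // write in the chosen format
        options.writeIndicesOptions(output);

        StreamInput input = output.bytes().streamInput();
        input.setVersion(version);    // and read in the very same format
        return IndicesOptions.readIndicesOptions(input);
    }
}
```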
This change adds a `listTasks` method to the high level java
ClusterClient which allows listing running tasks through the
task management API.
Related to #27205
Allows the setting to be specified using proper array syntax, for example:
"cluster.routing.allocation.awareness.attributes": [ "foo", "bar", "baz" ]
Closes #30617
This commit adds Create Repository, the associated docs and tests
for the high level REST API client. A few small changes to the
PutRepository Request and Response went into the commit as well.
This commit is related to #28898. It adds an nio driven http server
transport. Currently it only supports basic http features. Cors,
pipelining, and read timeouts will need to be added in future PRs.
* Refactor IndicesOptions to not be byte-based
This refactors IndicesOptions to be enum/enummap based rather than using a byte
as a bitmap for each of the options. This is necessary because we'd like to add
additional options, but we ran out of bits.
Backwards compatibility is kept for earlier versions so the option serialization
does not change the options.
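The shape of the refactor, sketched with illustrative option names: an `EnumSet` is not capped at the eight flags a byte bitmap can hold:

```java
import java.util.EnumSet;

class IndicesOptionsSketch {
    // adding a new constant here does not run out of bits
    enum Option {
        IGNORE_UNAVAILABLE, ALLOW_NO_INDICES, EXPAND_WILDCARDS_OPEN,
        EXPAND_WILDCARDS_CLOSED, FORBID_ALIASES_TO_MULTIPLE_INDICES,
        FORBID_CLOSED_INDICES, IGNORE_ALIASES
    }

    private final EnumSet<Option> options;

    IndicesOptionsSketch(EnumSet<Option> options) {
        this.options = options;
    }

    boolean ignoreUnavailable() {
        return options.contains(Option.IGNORE_UNAVAILABLE);
    }
}
```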
Relates sort of to #30188
When we split/shrink an index we open several IndexWriter instances
causing file-deletes to be pending on Windows. This subsequently fails
when we open an IW to bootstrap the index history due to pending deletes.
This change sidesteps the check since we know our history goes forward
in terms of files and segments.
Closes #30416
The order in which double values are added in java can give different results
for the sum, so we need to allow a certain delta in the test assertions. The
current value was still a bit too low, resulting in rare test failures. This
change increases the allowed margin of error by a factor of ten.
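A quick demonstration of why a delta is needed at all; floating-point addition is not associative, so summation order changes the result:

```java
public class SumOrder {
    public static void main(String[] args) {
        double a = 0.1, b = 0.2, c = 0.3;
        System.out.println((a + b) + c); // 0.6000000000000001
        System.out.println(a + (b + c)); // 0.6
    }
}
```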
In #28255 the implementation of the elasticsearch.keystore was changed
to no longer be built on top of a PKCS#12 keystore. A side effect of
that change was that calling getString or getFile on a closed
KeyStoreWrapper ceased to throw an exception, and would instead return
a value consisting of all 0 bytes.
This change restores the previous behaviour as closely as possible.
It is possible to retrieve the _keys_ from a closed keystore, but any
attempt to get or set the entries will throw an IllegalStateException.
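A sketch of the restored guard, with hypothetical internals:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// hypothetical internals: entries become inaccessible once the wrapper is
// closed, while listing the keys remains allowed
class KeyStoreWrapperSketch implements AutoCloseable {
    private final Map<String, char[]> entries = new HashMap<>();
    private volatile boolean closed = false;

    private void ensureOpen() {
        if (closed) {
            throw new IllegalStateException("Keystore has been closed");
        }
    }

    char[] getString(String setting) {
        ensureOpen(); // reading an entry from a closed keystore fails loudly
        return entries.get(setting);
    }

    Set<String> getSettingNames() {
        return entries.keySet(); // keys stay readable after close
    }

    @Override
    public void close() {
        closed = true;
    }
}
```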
Now that the change to deprecate copy settings and disallow it being
explicitly set to false is backported, this commit adjusts the BWC
versions in master.
Deprecate the use of empty templates. A bug fix allows empty
templates/scripts to be loaded on startup for upgrades/restarts,
but empty templates can no longer be created.
#30423 combined auto-expansion in the same cluster state update where nodes are removed. As
the auto-expansion step would run before disassociating the dead nodes from the routing table, the
auto-expansion would possibly remove replicas from live nodes instead of dead ones. This commit
reverses the order to ensure that when nodes leave the cluster that the auto-expand-replica
functionality only triggers after failing the shards on the removed nodes. This ensures that active
shards on other live nodes are not failed if the primary resided on a now dead node.
Instead, one of the replicas on the live nodes first gets promoted to primary, and the auto-
expansion (removing replicas) only triggers in a follow-up step (but still same cluster state update).
Relates to #30456 and follow-up of #30423
We currently have a separate endpoint for retrieving settings from all indices. We introduced such an endpoint when removing comma-separated feature parsing for GetIndicesAction. The RestGetAllSettingsAction duplicates the code to print out the response that we already have in GetSettingsResponse (since it became a ToXContentObject), and uses the get index API internally instead of the get settings API, but the response is the same, hence we can fold get all settings and get settings into a single API, which is what this commit does.
This commit changes the default out-of-the-box configuration for the
number of shards from five to one. We think this will help address a
common problem of oversharding. For users with time-based indices that
need a different default, this can be managed with index templates. For
users with non-time-based indices that find they need to re-shard with
the split API in place they no longer need to resort only to
reindexing.
Since this has the impact of changing the default number of shards used
in REST tests, we want to ensure that we still have coverage for issues
that could arise from multiple shards. As such, we randomize (rarely)
the default number of shards in REST tests to two. This is managed via a
global index template. However, some tests check the templates that are
in the cluster state during the test. Since this template is randomly
there, we need a way for tests to skip adding the template used to set
the number of shards to two. For this we add the default_shards feature
skip. To avoid having to write our docs in a complicated way because
sometimes they might be behind one shard, and sometimes they might be
behind two shards we apply the default_shards feature skip to all docs
tests. That is, these tests will always run with the default number of
shards (one).
The second set of assertions was accidentally using the count's
moving average for the error delta in the value's moving average
assertion. This fixes the typo, and unmutes the test.
Closes #29456
The following tokenizers were moved: classic, edge_ngram,
letter, lowercase, ngram, path_hierarchy, pattern, thai, uax_url_email and
whitespace.
Left keyword tokenizer factory in server module, because
normalizers directly depend on it. This should be addressed in a
follow-up change.
Relates to #23658
We want copying settings to be the default behavior. This commit
deprecates not copying settings, and disallows explicitly not copying
settings. This gives users a transition path to the future default
behavior.
These tests failed due to in flight operations on the primary shard.
Sadly, we don't have any clue on those ops. This commit unmutes
these tests and logs the acquirers when checking for ongoing ops.
1> [2018-05-02T23:10:32,145][INFO ][o.e.i.f.FlushIT ] Third
seal: Total shards: [2], failed: [true], reason: [[1] ongoing operations
on primary], detail: []
Relates #29392
The writeBlob method for FsBlobContainer already opens the file with StandardOpenOption.CREATE_NEW, so there's no need for an extra blobExists(blobName) check.
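For reference, `CREATE_NEW` makes the open itself fail atomically when the target exists, which is why the separate existence check was redundant:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

class CreateNewExample {
    static void writeBlob(Path path, byte[] bytes) throws IOException {
        // CREATE_NEW throws FileAlreadyExistsException if the target exists,
        // atomically, so no separate blobExists() pre-check is required
        try (OutputStream out = Files.newOutputStream(path, StandardOpenOption.CREATE_NEW)) {
            out.write(bytes);
        }
    }
}
```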
Fixes longitude validation in the geo_polygon_query builder. Queries
with an out-of-range longitude currently fail, but only later, during
polygon construction, and with a rather complicated error message.
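A sketch of the kind of up-front check this adds (the exact message and bounds handling are assumptions):

```java
class LongitudeCheck {
    // fail fast with a clear message instead of deep inside polygon construction
    static void validateLongitude(double lon) {
        if (Double.isNaN(lon) || lon < -180.0 || lon > 180.0) {
            throw new IllegalArgumentException(
                    "invalid longitude " + lon + "; must be between -180.0 and 180.0");
        }
    }
}
```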
Fixes #30488
The MasterService takes responsibility for timeouts of the AckListeners that it
creates, and the rest of the Discovery subsystem is unaware of these timeouts,
so there's no need for this to appear in the Discovery.AckListener interface.
Also fix a typo in the name of DelegatingAckListener.
This commit removes a test that we can not restore from 1.x and 2.x
repository files. This test is not needed, the version of Elasticsearch
that this commit targets can not even read index files from those
versions.
This commit avoids deadlocks in the cache by removing dangerous places
where we try to take the LRU lock while completing a future. Instead, we
block for the future to complete, and then execute the handling code
under the LRU lock (for example, eviction).
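The pattern, sketched generically; names are illustrative, not the actual cache internals:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.locks.ReentrantLock;

class CacheSketch {
    private final ReentrantLock lruLock = new ReentrantLock();

    // Completing a future while holding the LRU lock may run arbitrary
    // listeners under that lock and deadlock against other lock holders.
    // Instead: wait for the future outside the lock, then take the lock
    // for the bookkeeping only.
    void afterLoad(CompletableFuture<Void> loaded) {
        loaded.join(); // block until the value is fully loaded, lock-free
        lruLock.lock();
        try {
            // eviction / LRU bookkeeping happens here; no future is
            // completed while the lock is held
        } finally {
            lruLock.unlock();
        }
    }
}
```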
Previously `BulkProcessor` retry logic was based on the exception type of the failed response (`EsRejectedExecutionException`). This commit changes it to be based on the returned status code. This allows us to reproduce the same retry behaviour when the `BulkProcessor` is used from the high-level REST client, which was previously not the case as we cannot rebuild the same exception type when parsing back the response. This change has no effect on the transport client.
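A plausible retry predicate in these terms, assuming rejections surface as HTTP 429 (`TOO_MANY_REQUESTS`):

```java
import org.elasticsearch.action.bulk.BulkItemResponse;
import org.elasticsearch.rest.RestStatus;

class RetryPredicateSketch {
    // retry on 429 regardless of whether the client can reconstruct the
    // original EsRejectedExecutionException from a parsed REST response
    static boolean shouldRetry(BulkItemResponse item) {
        return item.isFailed() && item.status() == RestStatus.TOO_MANY_REQUESTS;
    }
}
```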
Closes #28885
This commit adds the Snapshot Client with a first API call within it,
the get repositories call in snapshot/restore module. This also creates
a snapshot namespace for the docs, as well as get repositories docs.
Relates #27205
Today we can execute cluster API actions on only master, data or ingest nodes
using the `master:true`, `data:true` and `ingest:true` filters, but it is not
so easy to select coordinating-only nodes (i.e. those nodes that are neither
master nor data nor ingest nodes). This change fixes this by adding support for
a `coordinating_only` filter such that `coordinating_only:true` adds all
coordinating-only nodes to the set of selected nodes, and
`coordinating_only:false` deletes them.
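The predicate behind the filter is simply the conjunction of the three negations:

```java
class NodeFilterSketch {
    // a coordinating-only node has none of the three specialised roles
    static boolean isCoordinatingOnly(boolean master, boolean data, boolean ingest) {
        return master == false && data == false && ingest == false;
    }
}
```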
Resolves #28831.
Fixes an edge case when using `more_like_this` where TermVectorsWriter
could throw an NPE when a field produced zero tokens after analysis. This
changes the implementation to use an empty list of tokens in this case.
Closes #30148
Auto-expands replicas in the same cluster state update (instead of a follow-up reroute) where nodes are added or removed.
Closes #1873, fixing an issue where nodes drop their copy of auto-expanded data when coming up, only to sync it again later.
Adds verification that geohashes are not empty and contain only
valid characters. It fixes an issue where an empty geohash was
treated as [-180, -90] and geohashes with non-geohash characters
were resolved into invalid coordinates.
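A sketch of the validation against the standard geohash base-32 alphabet (which deliberately omits a, i, l and o):

```java
class GeohashCheck {
    // standard geohash base-32 alphabet
    private static final String ALPHABET = "0123456789bcdefghjkmnpqrstuvwxyz";

    static void validate(String geohash) {
        if (geohash == null || geohash.isEmpty()) {
            throw new IllegalArgumentException("geohash must not be empty");
        }
        for (int i = 0; i < geohash.length(); i++) {
            char c = Character.toLowerCase(geohash.charAt(i));
            if (ALPHABET.indexOf(c) < 0) {
                throw new IllegalArgumentException(
                        "unsupported geohash character: " + geohash.charAt(i));
            }
        }
    }
}
```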
Closes #23579
When deleting or creating a snapshot for a given shard, elasticsearch
usually starts by listing all the existing snapshotted files in the repository.
Then it computes a diff and deletes the snapshotted files that are not
needed anymore. During this deletion, an exception is thrown if the file
to be deleted does not exist anymore.
This behavior is challenging with cloud based repository implementations
like S3 where a file that has been deleted can still appear in the bucket for
a few seconds/minutes (because the deletion can take some time to be fully
replicated on S3). If the deleted file appears in the listing of files, then the
following deletion will fail with a NoSuchFileException and the snapshot
will be partially created/deleted.
This pull request makes the deletion of these files a bit less strict, i.e. not
failing if the file we want to delete does not exist anymore. It introduces a
new BlobContainer.deleteIgnoringIfNotExists() method that can be used
at some specific places where not failing when deleting a file is
considered harmless.
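A plausible default implementation of the new method, sketched against a simplified interface:

```java
import java.io.IOException;
import java.nio.file.NoSuchFileException;

interface BlobContainerSketch {
    void deleteBlob(String blobName) throws IOException;

    // lenient variant: a blob that is already gone is exactly the state we wanted
    default void deleteIgnoringIfNotExists(String blobName) throws IOException {
        try {
            deleteBlob(blobName);
        } catch (NoSuchFileException e) {
            // ignore: the file had already been deleted
        }
    }
}
```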
Closes #28322
The test indexes new documents and is thus correct in testing that the response result
is `CREATED`. Sadly we can't guarantee exactly once delivery just yet.
Relates #9967, Closes #21658
Today when processing a request for a URL path for which we can not find
a handler we send back a plain-text response. Yet, we have the accept
header in our hand and can respect the accepted media type of the
request. This commit addresses this.
Changes how data is read from CipherInputStream
Instead of using `read()` and checking that the bytes read are what we
expect, use `readFully()`, which reads exactly the expected number of
bytes, blocking until they are available and throwing an
`EOFException` if the stream ends before all bytes can be read.
This approach keeps the simplicity of using CipherInputStream while
working as expected with both JCE and BCFIPS Security Providers
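For reference, one standard way to get `readFully()` semantics is wrapping the cipher stream in a `DataInputStream`:

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

class ReadFullyExample {
    // reads exactly expectedLength bytes or throws EOFException, which is the
    // behaviour a manual read() loop was approximating before
    static byte[] readAll(InputStream cipherStream, int expectedLength) throws IOException {
        byte[] buffer = new byte[expectedLength];
        try (DataInputStream in = new DataInputStream(cipherStream)) {
            in.readFully(buffer);
        }
        return buffer;
    }
}
```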
This PR adds support for the Get Settings API to the java high-level rest client.
Furthermore, logic related to the retrieval of default settings has been moved from the rest layer into the transport layer, and now default settings may be retrieved consistently via both the rest API and the transport API.
Upgrade to lucene-7.4.0-snapshot-1ed95c097b
This version contains:
* An Analyzer for Korean
* An IntervalQuery and IntervalsSource that retrieve minimum intervals of positional queries.
* A new API to retrieve matches (offsets and positions) of a query for a single document.
* Support for soft deletes in the index writer.
* A fixed shingle filter that handles index time synonyms.
* Support for emoji sequence in ICUTokenizer (with an upgrade to icu 61.1)
We were recently looking at bugs that can only occur if two different documents were indexed concurrently. For example, what happens if the local checkpoint advances above the sequence number of a document that's being indexed? That can only happen if another concurrent operation caused the checkpoint to advance, and it has to be another document to allow concurrency, as we acquire a per-uid lock. While our investigation proved that the suspected bug doesn't exist, we still discovered that our unit testing coverage is not good enough to cover this case.
This PR extends the test for concurrent out-of-order replica processing to use two documents in its history.