* Deprecate types in count requests.
* Move RestCountAction to the 'search' package.
* Deprecate types in multi search requests.
* Add tests for types deprecation in the _search endpoint.
We've decided that the bulk, delete, get, index, update, and search APIs should not
contain the `include_type_name` request parameter, and we will instead accept both typed and typeless calls.
Today if a wildcard, date-math expression or alias expands/resolves
to an index that is search-throttled we still search it. This is likely
not the desired behavior since it can unexpectedly slow down searches
significantly.
This change adds a new indices option that allows `search`, `count`
and `msearch` to ignore throttled indices by default. Users can
force expansion to throttled indices by using `ignore_throttled=false`
on the rest request.
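For illustration, a minimal sketch of a request that opts back in to searching throttled indices (index pattern hypothetical):
```
GET /logs-*/_search?ignore_throttled=false
{
  "query": { "match_all": {} }
}
```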
Relates to #34352
This commit adds a new single value metric aggregation that calculates
the statistic called median absolute deviation, which is a measure of
variability that works on more types of data than standard deviation.
Our calculation of MAD is approximated using t-digests. In the collect
phase, we collect each value visited into a t-digest. In the reduce
phase, we merge all value t-digests, then create a t-digest of
deviations using the first t-digest's median and centroids.
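An illustrative request sketch (index and field names hypothetical):
```
POST /reviews/_search
{
  "size": 0,
  "aggs": {
    "review_variability": {
      "median_absolute_deviation": { "field": "rating" }
    }
  }
}
```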
Deprecates `_source_include` and `_source_exclude` url parameters
in favor of `_source_includes` and `_source_excludes` because those
are consistent with the rest of Elasticsearch's APIs.
Relates to #22792
This change adds throttling to the update-by-query and delete-by-query cases
similar to throttling for reindex. This mostly means additional methods on the
client class itself, since the request hits the same RestHandler, just with
slightly different endpoints, and also the return values are similar.
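As a sketch of the REST surface involved (index name and task id hypothetical), throttling can be set up front and adjusted later via the rethrottle endpoints:
```
POST /my_index/_update_by_query?requests_per_second=10
POST /_update_by_query/r1A2WoR:36619/_rethrottle?requests_per_second=-1
POST /_delete_by_query/r1A2WoR:36619/_rethrottle?requests_per_second=500
```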
Mappings with completion type and multi-fields, were not able to index array or
object format on completion fields. Only string format was supported.
This is fixed by providing the multiField parser with an externalValueContext holding the already parsed object.
Closes #15115
Although we allow indexing BigInteger and BigDecimal into a keyword
field, source filtering on these fields would fail
as XContentBuilder was not able to serialize BigInteger and BigDecimal
to JSON.
This modifies XContentBuilder to handle BigInteger and
BigDecimal.
Closes #32395
This commit duplicates REST tests for the
- `indices.create`
- `indices.put_mapping`
- `indices.get_mapping`
- `index`
- `get`
- `delete`
- `update`
- `bulk`
APIs, so that we both test them when used without types (include_type_name=false)
and with types, mostly for mixed-version cluster tests.
Given a suite called `X_test_name.yml`, I first copied it to
`(X+1)_test_name_with_types.yml` and then changed `X_test_name.yml` to set
`include_type_name=false` on every API that supports it.
Relates #15613
This commit fixes bwc rest tests for the doc_values format deprecation
in search. The message of the deprecation changed in 6.4.1 so the bwc test
should not check against 6.4.0.
* Improves doc values format deprecation message
This changes the deprecation message when doc values fields do not
supply a format, from logging a deprecation warning for each offending
field individually to logging a single message which lists all
offending fields.
Closes #33572
* Updates YAML test with new deprecation message
Also adds a test to ensure multiple deprecation warnings are collated
into one message
* Condenses collection of fields without format check
Moves the collection of fields that don't have a format to a separate
loop and moves the logging of the deprecation warning to be next to it
at the expense of looping through the field list twice
* fixes typo
* Fixes test
Now that types are unique per mapping we can retrieve the document mapper
without referencing the type. This fixes an NPE when stored fields are disabled.
For 6x we'll need a different fix since mappings can still have multiple types.
Relates #32941
We used to set `maxScore` to `0` within `TopDocs` in situations where there is really no score, as the size was set to `0` and scores were not even tracked. In such scenarios, `Float.NaN` is more appropriate, which gets converted to `max_score: null` on the REST layer. That's also more consistent with Lucene, which sets `maxScore` to `Float.NaN` when merging empty `TopDocs` (see `TopDocs#merge`).
Today `_msearch` doesn't allow modifying the `max_concurrent_shard_requests`
per sub search request. This change adds support for setting this parameter on
all sub-search requests in an `_msearch`.
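A sketch of the resulting request (indices hypothetical); the parameter applies to every sub-search:
```
GET /_msearch?max_concurrent_shard_requests=3
{"index": "test-1"}
{"query": {"match_all": {}}}
{"index": "test-2"}
{"query": {"match_all": {}}}
```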
Relates to #31877
* Make cluster stats response contain cluster UUID
* Updating constructor usage in Monitoring tests
* Adding cluster_uuid field to Cluster Stats API reference doc
* Adding rest api spec test for expecting cluster_uuid in cluster stats response
* Adding missing newline
* Indenting do section properly
* Missed a spot!
* Fixing the test cluster ID
Adds a new single-value metrics aggregation that computes the weighted
average of numeric values that are extracted from the aggregated
documents. These values can be extracted from specific numeric
fields in the documents.
When calculating a regular average, each datapoint has an equal "weight"; it
contributes equally to the final value. In contrast, weighted averages
scale each datapoint differently. The amount that each datapoint contributes
to the final value is extracted from the document, or provided by a script.
As a formula, a weighted average is `∑(value * weight) / ∑(weight)`.
A regular average can be thought of as a weighted average where every value has
an implicit weight of `1`.
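An illustrative request sketch (index and field names hypothetical):
```
POST /exams/_search
{
  "size": 0,
  "aggs": {
    "weighted_grade": {
      "weighted_avg": {
        "value": { "field": "grade" },
        "weight": { "field": "weight" }
      }
    }
  }
}
```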
Closes #15731
* add support for is_write_index in put-alias body parsing
The Rest Put-Alias Action does separate parsing of the alias body
to construct the IndicesAliasesRequest. This extra parsing
was missed in #30703.
* test that the flag is not just ignored by the parser
* disable backcompat tests
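For reference, a sketch of the alias body this parsing now handles (index and alias names hypothetical):
```
PUT /logs-000001/_alias/logs
{
  "is_write_index": true
}
```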
This creates YAML test "features" that indicate if the cluster being
tested has xpack installed (`xpack`) or if it does *not* have xpack
installed (`no_xpack`). It uses those features to centralize skipping
a few tests that fail if xpack is installed.
The plan is to use this in a followup to skip docs tests that require
xpack when xpack is not installed. We *plan* to use the declaration
of required license level on the docs page to generate the required
`skip`.
Closes #30933.
Get Mapping currently throws index not found exception (and returns
404 status code) from the REST layer whenever an index was specified
and no indices have been returned. We should not have this logic in the
REST layer though as only our index resolver should decide whether we
need to throw exceptions or not based on provided indices and corresponding
indices options.
Closes #31485
This commit adds the is-write-index flag for aliases.
It allows requests to set the flag, and responses to display the flag.
It does not validate and/or affect any indexing/getting/updating behavior
of Elasticsearch -- this will be done in a follow-up PR.
Add a `NodeSelector` so that users can filter the nodes that receive
requests based on node attributes.
I believe we'll need this to backport #30523 and we want it anyway.
I also added a bash script to help with rebuilding the sniffer parsing
test documents.
Allows users of the Low Level REST client to specify which hosts a
request should be run on. They implement the `NodeSelector` interface
or reuse a built-in selector like `NOT_MASTER_ONLY` to choose which nodes
are valid. Using it looks like:
```
Request request = new Request("POST", "/foo/_search");
RequestOptions.Builder options = request.getOptions().toBuilder();
options.setNodeSelector(NodeSelector.NOT_MASTER_ONLY);
request.setOptions(options);
...
```
This introduces a new `Node` object which contains a `HttpHost` and the
metadata about the host. At this point that metadata is just `version`
and `roles` but I plan to add node attributes in a followup. The
canonical way to **get** this metadata is to use the `Sniffer` to pull
the information from the Elasticsearch cluster.
I've marked this as "breaking-java" because it breaks custom
implementations of `HostsSniffer` by renaming the interface to
`NodesSniffer` and by changing it from returning a `List<HttpHost>` to a
`List<Node>`. It *shouldn't* break anyone else though.
Because we expect to find it useful, this also implements `host_selector`
support for `do` statements in the yaml tests. Using it looks a little
like:
```
---
"example test":
- skip:
features: host_selector
- do:
host_selector:
version: " - 7.0.0" # same syntax as skip
apiname:
something: true
```
The `do` section parses the `version` string into a host selector that
uses the same version comparison logic as the `skip` section. When the
`do` section is executed it passed the off to the `RestClient`, using
the `ElasticsearchHostsSniffer` to sniff the required metadata.
The idea is to use this in mixed version tests to target a specific
version of Elasticsearch so we can be sure about the deprecation
logging though we don't currently have any examples that need it. We do,
however, have at least one open pull request that requires something
like this to properly test it.
Closes #21888
With `max_concurrent_shard_requests` we used to throttle / limit
the number of concurrent shard requests a high level search request
can execute per node. This had several problems since it limited the
number on a global level based on the number of nodes. This change
now throttles the number of concurrent requests per node while still
allowing concurrency across multiple nodes.
Closes #31192
The following analyzers were moved from server module to analysis-common module:
`snowball`, `arabic`, `armenian`, `basque`, `bengali`, `brazilian`, `bulgarian`,
`catalan`, `chinese`, `cjk`, `czech`, `danish`, `dutch`, `english`, `finnish`,
`french`, `galician` and `german`.
Relates to #23658
* Fix index prefixes to work with span_multi
Text fields that use `index_prefixes` can rewrite `prefix` queries into
`term` queries internally. This commit fixes the handling of this rewriting
in the `span_multi` query.
This change also copies the index options of the text field into the
prefix field in order to be able to run positional queries. This is mandatory
for `span_multi` to work but this could also be useful to optimize `match_phrase_prefix`
queries in a follow up. Note that this change can only be done on indices created
after 6.3 since we set the index options to doc only in this version.
Fixes #31056
Specifying `index_phrases: true` on a text field mapping will add a subsidiary
[field]._index_phrase field, indexing two-term shingles from the parent field.
The parent analysis chain is re-used, wrapped with a FixedShingleFilter.
At query time, if a phrase match query is executed, the mapping will redirect it
to run against the subsidiary field.
This should trade faster phrase querying for a larger index and longer indexing
times.
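A mapping sketch (index and field names hypothetical):
```
PUT /my_index
{
  "mappings": {
    "_doc": {
      "properties": {
        "body": {
          "type": "text",
          "index_phrases": true
        }
      }
    }
  }
}
```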
Relates to #27049
Since master will always communicate with a >=6.4 node, the logic for
checking if the node is 6.4 and conditionally reading and writing based
on that can be removed from master. This logic will stay in 6.x as it is
the bridge to the cleaner response in master. This also unmutes the
failing test due to this bwc change.
Closes #30807
This change deprecates completion queries and documents without context that target a
context enabled completion field. Querying without context degrades the search
performance considerably (even when the number of indexed contexts is low).
This commit targets master but the deprecation will take place in 6.x and the functionality
will be removed in 7 in a follow up.
Closes #29222
This change adds a new option to the composite aggregation named `missing_bucket`.
This option can be set per source and dictates whether documents without a value for the
source should be ignored. When set to true, documents without a value for a field emit
an explicit `null` value which is then added to the composite bucket.
The `missing` option that allows setting an explicit value (instead of `null`) is deprecated in this change and will be removed in a follow up (only in 7.x).
This commit also changes how the big arrays are allocated: instead of reserving
the provided `size` for all sources, they are created with a small initial size and grow
depending on the number of buckets created by the aggregation.
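A fragment sketch in the style of the other composite examples here (source and field names hypothetical):
```
"composite" : {
  "sources" : [
    { "product": { "terms" : { "field": "product", "missing_bucket": true } } }
  ]
}
```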
Closes #29380
Include size of snapshot in snapshot metadata
Adds the difference in the number of files (and file sizes) between the previous and current snapshot. The total number/size reflects the total number/size of files in the snapshot.
Closes #18543
This PR breaks the include_defaults functionality of the get settings API into its own
test, which is skipped for mixed-mode clusters containing pre-6.4 nodes.
The writeTo method of VerifyRepositoryResponse incorrectly used its
local version to determine what it was receiving, rather than the
sender's version. This fixes a bug that occasionally happened when a 6.4
master node sent data to a 7.0 client, causing the number of bytes to be
improperly read. This also unmutes the test.
Closes #30807
This commit adds the ability to configure how a docvalue field should be
formatted, so that it would be possible e.g. to return a date field
formatted as the number of milliseconds since Epoch.
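An illustrative request sketch (index and field names hypothetical):
```
GET /my_index/_search
{
  "docvalue_fields": [
    { "field": "timestamp", "format": "epoch_millis" }
  ]
}
```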
Closes #27740
Currently in a rescore request, if window_size is smaller than
the top N documents returned (N=size), the explanation of scores could be incorrect
for documents that were part of the top N but not part of rescoring.
This PR corrects this by saving in RescoreContext the docIDs of documents
for which rescoring was applied, and adding a rescoring explanation
only for these docIDs.
Closes #28725
Since #30143, the Cluster State API should always return the current
cluster_uuid in the response body, regardless of the metrics filters.
This is not exactly true as it is returned only if metadata metrics and
no specific indices are requested.
This commit fixes the behavior to always return the cluster_uuid and
adds a new test.
This pipeline aggregation gives the user the ability to script functions that "move" across a window
of data, instead of single data points. It is the scripted version of MovingAvg pipeline agg.
Through custom script contexts, we expose a number of convenience methods:
- MovingFunctions.max()
- MovingFunctions.min()
- MovingFunctions.sum()
- MovingFunctions.unweightedAvg()
- MovingFunctions.linearWeightedAvg()
- MovingFunctions.ewma()
- MovingFunctions.holt()
- MovingFunctions.holtWinters()
- MovingFunctions.stdDev()
The user can also define any arbitrary logic via their own scripting, or combine with the above methods.
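A fragment sketch of such a pipeline aggregation (aggregation name and buckets_path hypothetical):
```
"the_moving_fn": {
  "moving_fn": {
    "buckets_path": "the_sum",
    "window": 10,
    "script": "MovingFunctions.unweightedAvg(values)"
  }
}
```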
When we split/shrink an index we open several IndexWriter instances
causing file-deletes to be pending on Windows. This subsequently fails
when we open an IW to bootstrap the index history due to pending deletes.
This change sidesteps the check since we know our history goes forward
in terms of files and segments.
Closes #30416
Now that the change to deprecate copy settings and disallow it being
explicitly set to false is backported, this commit adjusts the BWC
versions in master.
This commit changes the default out-of-the-box configuration for the
number of shards from five to one. We think this will help address a
common problem of oversharding. For users with time-based indices that
need a different default, this can be managed with index templates. For
users with non-time-based indices that find they need to re-shard with
the split API in place they no longer need to resort only to
reindexing.
Since this has the impact of changing the default number of shards used
in REST tests, we want to ensure that we still have coverage for issues
that could arise from multiple shards. As such, we randomize (rarely)
the default number of shards in REST tests to two. This is managed via a
global index template. However, some tests check the templates that are
in the cluster state during the test. Since this template is randomly
there, we need a way for tests to skip adding the template used to set
the number of shards to two. For this we add the default_shards feature
skip. To avoid having to write our docs in a complicated way because
sometimes they might be behind one shard and sometimes they might be
behind two shards, we apply the default_shards feature skip to all docs
tests. That is, these tests will always run with the default number of
shards (one).
The following tokenizers were moved: classic, edge_ngram,
letter, lowercase, ngram, path_hierarchy, pattern, thai, uax_url_email and
whitespace.
Left keyword tokenizer factory in server module, because
normalizers directly depend on it. This should be addressed in a
follow up change.
Relates to #23658
We want copying settings to be the default behavior. This commit
deprecates not copying settings, and disallows explicitly not copying
settings. This gives users a transition path to the future default
behavior.
There are two tests that have failed multiple times in one day on Windows CI.
This commit AwaitsFixes them until their timeout issues are resolved.
tracking here: https://github.com/elastic/elasticsearch/issues/30503
That PR changed the execution path of index settings defaults to be on the master;
until the PR is back-ported, the old master will not return default settings.
This PR adds support for the Get Settings API to the java high-level rest client.
Furthermore, logic related to the retrieval of default settings has been moved from the rest layer into the transport layer, and now default settings may be retrieved consistently via both the rest API and the transport API.
Many tests are added with a version check so that they do not run against a
version that doesn't have the feature yet. Master is 7.0, so all tests that
do not run against 6.0+ can be removed and the version check can be removed
on all tests that always run on 6.0+.
This adds a new `_ignored` meta field which indexes and stores fields that have
been ignored at index time because of the `ignore_malformed` option. It makes
malformed documents easier to identify by using `exists` or `term(s)` queries
on the `_ignored` field.
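An illustrative query sketch (index name hypothetical):
```
GET /my_index/_search
{
  "query": { "exists": { "field": "_ignored" } }
}
```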
Closes #29494
Today when an index is created from shrinking or splitting an existing
index, the target index inherits almost none of the source index
settings. This is surprising and a hassle for operators managing such
indices. Given this is the default behavior, we can not simply change
it. Instead, we start by introducing the ability to copy settings. This
flag can be set on the REST API or on the transport layer and it has the
behavior that it copies all settings from the source except non-copyable
settings (a property of a setting introduced in this
change). Additionally, settings on the request will always override.
This change is the first step in our adventure:
- this flag is added here in 7.0.0 and immediately deprecated
- this flag will be backported to 6.4.0 and remain deprecated
- then, we will remove the ability to set this flag to false in 7.0.0
- finally, in 8.0.0 we will remove this flag and the only behavior will
be for settings to be copied
Currently, the only way to get the REST response for the `/_cluster/state`
call to return the `cluster_uuid` is to request the `metadata` metrics,
which is one of the most expensive response structures. However, external
monitoring agents will likely want the `cluster_uuid` to correlate the
response with other API responses whether or not they want cluster
metadata.
Clearing the indices cache can be done via GET and POST. As GET should
only support read only operations, this removes the support for using
GET for clearing the indices caches.
This commit renames the bulk thread pool to the write thread pool. This
is to better reflect the fact that the underlying thread pool is used to
execute any document write request (single-document index/delete/update
requests, and bulk requests).
With this change, we add support for fallback settings
thread_pool.bulk.* which will be supported until 7.0.0.
We also add a system property so that the display name of the thread
pool remains as "bulk" if needed to avoid breaking users.
Added an API that allows an arbitrary script to be executed and a result to be returned.
```
POST /_scripts/painless/_execute
{
  "script": {
    "source": "params.var1 / params.var2",
    "params": {
      "var1": 1,
      "var2": 1
    }
  }
}
```
Relates to #27875
Now that single-document indexing requests are executed on the bulk
thread pool the index thread pool is no longer needed. This commit
removes this thread pool from Elasticsearch.
This commit adds the `include_type_name` option to the `index`, `update`,
`delete`, `get`, `bulk` and `search` APIs. When set to `false`, the response
will omit the `_type` in the response. This option doesn't work if the endpoint
contains a type. For instance, the following call would succeed:
```
GET index/_doc/1?include_type_name=false
```
But the following one would fail:
```
GET index/some_type/1?include_type_name=false
```
Relates #15613
CRUD: Parsing changes for UpdateRequest (#29293)
Use `ObjectParser` to parse `UpdateRequest` so we reject unknown fields
and drop support for the `_fields` parameter because it was deprecated
in 5.x.
Unlike the `indices.create`, `indices.get_mapping` and `indices.put_mapping`
APIs, the index APIs do not need the `include_type_name` option, they can
work with and without types without knowing whether types are being used.
Internally, `_doc` is used as a type if no type is provided, like for the
`indices.put_mapping` API.
Currently, flush stats contain only the total flush count, which is the sum
of manual flushes (via API) and periodic flushes (triggered asynchronously when the
uncommitted translog size exceeds the flush threshold). Sometimes,
it's useful to know these two numbers independently. This commit tracks
and returns a periodic flush count in the flush stats.
This adds an `include_type_name` option to the `indices.create`,
`indices.get_mapping` and `indices.put_mapping` APIs, which defaults to `true`.
When set to `false`, then mappings will be returned directly in the body of
the `indices.get_mapping` API, without keying them by the type name, the
`indices.create` will expect mappings directly under the `mappings` key, and
the `indices.put_mapping` will use `_doc` as a type name and fail if a `type`
is provided explicitly.
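A sketch of a typeless create-index call under this option (index and field names hypothetical):
```
PUT /my_index?include_type_name=false
{
  "mappings": {
    "properties": {
      "title": { "type": "text" }
    }
  }
}
```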
Relates #15613
Some features have been deprecated since `6.0` like the `_parent` field or the
ability to have multiple types per index. This allows us to remove quite a lot of
code, which in turn will hopefully make it easier to proceed with the removal
of types.
* Move ObjectParser into the x-content lib
This moves `ObjectParser`, `AbstractObjectParser`, and
`ConstructingObjectParser` into the libs/x-content dependency. This decoupling
allows them to be used for parsing for projects that don't want to depend on the
entire Elasticsearch jar.
Relates to #28504
Currently the ranking evaluation API doesn't support many of the
standard parameters of the search API. Some of these make sense, like
adding support for the common indices options parameters, which this
change adds.
Today we report thread pool info using a common object. This means that
we use a shared set of terminology that is not consistent with the
terminology used to configure thread pools. This holds in particular
for the minimum and maximum number of threads in the thread pool where
we use the following terminology:
thread pool info | fixed | scaling
min              | size  | core
max              | size  | max
A previous change addressed this for the nodes info API. This commit
changes the display of thread pool info in the cat thread pool API too
to be dependent on the type of the thread pool so that we can align the
terminology in the output of thread pool info with the terminology used
to configure a thread pool.
Some source files seem to have the execute bit (a+x) set, which doesn't
really seem to hurt but is a bit odd. This change removes those, making
the permissions similar to other source files in the repository.
This change refactors the composite aggregation to add an execution mode that visits documents in the order of the values
present in the leading source of the composite definition. This mode does not need to visit all documents since it can early terminate
the collection when the leading source value is greater than the lowest value in the queue.
Instead of collecting the documents in the order of their doc_id, this mode uses the inverted lists (or the bkd tree for numerics) to collect documents
in the order of the values present in the leading source.
For instance the following aggregation:
```
"composite" : {
"sources" : [
{ "value1": { "terms" : { "field": "timestamp", "order": "asc" } } }
],
"size": 10
}
```
... can use the field `timestamp` to collect the documents with the 10 lowest values for the field instead of visiting all documents.
For composite aggregations with more than one source the execution can early terminate as soon as one of the 10 lowest values produces enough
composite buckets. For instance if visiting the two lowest timestamps creates 10 composite buckets, we can early terminate the collection since it
is guaranteed that the third lowest timestamp cannot create a composite key that compares lower than the ones already visited.
This mode can execute iff:
* The leading source in the composite definition uses an indexed field of type `date` (works also with `date_histogram` source), `integer`, `long` or `keyword`.
* The query is a match_all query or a range query over the field that is used as the leading source in the composite definition.
* The sort order of the leading source is the natural order (ascending since postings and numerics are sorted in ascending order only).
If these conditions are not met this aggregation visits each document like any other agg.
By the time the master branch is released the deprecated url
parameters in the `/_cache/clear` API will have been deprecated
for a couple of minor releases. Since master will be the next
major release we are fine with removing these parameters.
* Add a REST integration test that documents date_range support
Add a test case that exercises date_range aggregations using the missing
option.
Addresses #17597
* Test cleanup and correction
Adding a document with a null date to exercise `missing` option, update
test name to something reasonable.
* Update documentation to explain how the "missing" parameter works for
date_range aggregations.
* Wrap lines at 80 chars in docs.
* Change format of test to YAML for readability.
This reverts commit f057fc294a.
The rescorer does not resort the collapsed values inside the top docs
during rescoring. For this reason the Lucene rescorer is not compatible
with collapsing.
Relates #27243
* Reject regex search if regex string is too long (#28344)
* Add docs
* Introduce index level setting `index.max_regex_length`
to control the maximum length of the regular expression
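An illustrative sketch of setting it at index creation (index name and value hypothetical):
```
PUT /my_index
{
  "settings": { "index.max_regex_length": 2000 }
}
```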
Closes #28344
We added a rest test for the translog last modified age without a
version check. This caused BWC failures because the stats are not
available in the old versions.
Relates #28613
Currently the Translog constructor is capable both of opening an existing translog and creating a
new one (deleting existing files). This PR separates these two into separate code paths. The
constructor opens files and a dedicated static method creates an empty translog.
Adds an allow_partial_search_results flag to search requests, with a default setting of true.
When false, the request will error if the search either times out, has partial errors or has missing shards,
rather than returning partial search results. A cluster-level setting provides a default for search requests with no flag.
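A request sketch (index name hypothetical); the cluster-level default is assumed here to be the `search.default_allow_partial_results` setting:
```
GET /my_index/_search?allow_partial_search_results=false
{
  "query": { "match_all": {} }
}
```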
Closes #27435
This adds the ability to index term prefixes into a hidden subfield, enabling prefix queries to be run without multitermquery rewrites. The subfield reuses the analysis chain of its parent text field, appending an EdgeNGramTokenFilter. It can be configured with minimum and maximum ngram lengths. Query terms with lengths outside this min-max range fall back to using prefix queries against the parent text field.
The mapping looks like this:
"my_text_field" : {
"type" : "text",
"analyzer" : "english",
"index_prefix" : { "min_chars" : 1, "max_chars" : 10 }
}
Relates to #27049
This change adds the `after_key` of a composite aggregation directly in the response.
It is redundant when no buckets are filtered/removed by a pipeline aggregation, since in that case the `after_key` is always the key of the last bucket
in the response. Though when using a pipeline aggregation to filter composite buckets, the `after_key` can be lost if the last bucket is filtered.
This commit fixes this situation by always returning the `after_key` in a dedicated section.
This commit adds the ability to specify a date format on the `date_histogram` composite source.
If the format is defined, the key for the source is returned as a formatted date.
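A fragment sketch (field name and format hypothetical):
```
"composite" : {
  "sources" : [
    { "date": { "date_histogram" : { "field": "timestamp", "interval": "1d", "format": "yyyy-MM-dd" } } }
  ]
}
```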
Closes #27923
MixedClusterClientYamlTestSuiteIT sometimes fails when executing the
indices.stats/13_fields/* REST tests. It does not reproduce locally
but the execution logs show that it failed when a shard is relocating
during the set up execution. This commit changes the set up so that it
now waits for all shards to be active before executing the tests.
Closes #26732, #27146
Currently when adding a document with a `null` value for a range field,
the range field mapper raises an error. Instead we should ignore null like
we do e.g. with numbers or geo points.
Closes #27845
Java 9 added some enhancements to the internationalization support that
impact our date parsing support. To ensure flawless BWC and consistent
behavior going forward, Java 9 runtimes require the system property
`java.locale.providers=COMPAT` to be set.
Closes #10984
- Introduce index level settings to control the maximum number of terms
that can be used in a Terms Query
- Throw an error if a request exceeds this max number
Closes #18829
This change fixes a bug when a keyword term in the `after` key is not present in the shard.
In this case the global ords of the document values are compared with the insertion point of the
`after` keyword, and values that are equal to the insertion point should be considered "after" the top value.
* Limit the analyzed text for highlighting
- Introduce index level settings to control the max number of character
to be analyzed for highlighting
- Throw an error if analysis is required on a larger text
Closes #27517
Prior to this change, when DocStats were added together (for example when adding the index size of all primary shards for an index) we naively added the `totalSizeInBytes` together. This worked most of the time but not when the index size on one or multiple shards was reported to be `-1` (no value).
This change improves the logic by considering if the current value or the value to be added is `-1`:
* If the current and new value are both `-1` the value remains at `-1`
* If the current value is `-1` and the new value is not `-1`, current value is changed to be equal to the new value
* If the current value is not `-1` and the new value is `-1` the new value is ignored and the current value is not changed
* If both the current and new values are not `-1` the current value is changed to be equal to the sum of the current and new values.
The change also re-enables the failing rollover YAML test that was failing due to this bug.
Currently FiltersAggregationBuilder#doRewrite creates a new FiltersAggregationBuilder which doesn't correctly copy the original "keyed" field if a non-keyed filter gets rewritten.
This can cause rendering bugs of the output aggregations like the one reported in #27841.
Closes #27841
This commit moves the range field mapper back to core so that we can
remove the compile-time dependency of percolator on mapper-extras which
complicates dependency management for the percolator client JAR, and
modules should not be intertwined like this anyway.
Relates #27854
This commit adds a new dynamic cluster setting named `search.max_buckets` that can be used to limit the number of buckets created per shard or by the reduce phase. Each multi bucket aggregator can consume buckets during the final build of the aggregation at the shard level or during the reduce phase (final or not) in the coordinating node. When an aggregator consumes a bucket, a global count for the request is incremented and if this number is greater than the limit an exception is thrown (TooManyBuckets exception).
This change adds the ability for multi bucket aggregator to "consume" buckets in the global limit, the default is 10,000. It's an opt-in consumer so each multi-bucket aggregator must explicitly call the consumer when a bucket is added in the response.
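An illustrative sketch of adjusting the limit dynamically (value hypothetical):
```
PUT /_cluster/settings
{
  "persistent": { "search.max_buckets": 20000 }
}
```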
Closes #27452, #26012
Add an index level setting `index.analyze.max_token_count` to control
the number of generated tokens in the _analyze endpoint.
Defaults to 10000.
Throw an error if the number of generated tokens exceeds this limit.
Closes #27038
Also include _type and _id for parent/child hits inside inner hits.
In the case of top_hits aggregation the nested search hits are
directly returned and are not grouped by a root or parent document, so
it is important to include the _id and _index attributes in order to know
to which documents these nested search hits belong.
Closes #27053
Today we require users to prepare their indices for split operations.
Yet, we can do this automatically when an index is created which would
make the split feature a much more appealing option since it doesn't have
any 3rd party prerequisites anymore.
This change automatically sets the number of routing shards such that
an index is guaranteed to be able to split once into twice as many shards.
The number of routing shards is scaled towards the default shard limit per index
such that indices with a smaller number of shards can be split more often than
larger ones. For instance an index with 1 or 2 shards can be split 10x
(until it approaches 1024 shards) while an index created with 128 shards can only
be split 3x by a factor of 2. Please note this is just a default value and users
can still prepare their indices with `index.number_of_routing_shards` for custom
splitting.
NOTE: this change has an impact on the document distribution since we are changing
the hash space. Documents are still uniformly distributed across all shards, but since
we are artificially changing the number of buckets in the consistent hashing space
documents might be hashed into different shards compared to previous versions.
This is a 7.0 only change.
Add an index level setting `index.mapping.nested_objects.limit` to control
the number of nested json objects that can be in a single document
across all fields. Defaults to 10000.
Throw an error if the number of created nested documents exceeds this
limit during the parsing of a document.
Closes #26962
This change removes the module named aggs-composite and adds the `composite` aggs
as a core aggregation. This allows other plugins to use this new aggregation
and simplifies the integration in the HL rest client.
While we have an assertion that checks if the number of routing shards is a multiple
of the number of shards, we need a real hard exception that checks this much earlier.
This change adds a check and test that is executed before we create the index.
Relates to #26931
Today Cross Cluster Search requires at least one node in each remote cluster to be up once the cross cluster search is run. Otherwise the whole search request fails despite some of the data (either local and/or remote) being available. This happens when performing the _search/shards calls to find out which remote shards the query has to be executed on. This scenario is different from shard failures that may happen later on when the query is actually executed, in case e.g. remote shards are missing, which is not going to fail the whole request but rather yield partial results, and the _shards section in the response will indicate that.
This commit introduces a boolean setting per cluster called search.remote.$cluster_alias.skip_if_disconnected, set to false by default, which allows skipping certain clusters if they are down when trying to reach them through a cross cluster search request. By default all clusters are mandatory.
Scroll requests support such setting too when they are first initiated (first search request with scroll parameter), but subsequent scroll rounds (_search/scroll endpoint) will fail if some of the remote clusters went down meanwhile.
The search API response now contains a new _clusters section, similar to the _shards section, that gets returned whenever one or more clusters were disconnected and got skipped:
"_clusters" : {
"total" : 3,
"successful" : 2,
"skipped" : 1
}
This section won't be part of the response if no clusters have been skipped.
The per cluster skip_unavailable setting value has also been added to the output of the remote/info API.
Today we index dummy values for seq_ids and version on nested documents.
This is on the one hand trappy since users can request these values via
inner hits and on the other hand not necessarily good for compression since
the dummy value will likely not compress well when seqIDs are lowish.
This change ensures that we share the same field values for all documents in a
nested block. This won't have any overhead, in fact it might be more efficient since
we even reduce the work needed slightly.
Standardize underscore requirements in parameters across different types of
requests:
_index, _type, _source, _id keep their underscores
params like version and retry_on_conflict will be without underscores
Throw an error if older versions of parameters are used
BulkRequest, MultiGetRequest, TermVectorsRequest, MoreLikeThisQuery
were changed
Closes #26886
The default value for ignore_unavailable did not match what was documented when using the REST APIs for snapshot creation and restore. This commit sets the default value of ignore_unavailable to false, the way it is documented and ensures it's the same when using either REST API or transport client.
Closes #25359
* REST: Rename ingest.processor.grok to ingest.processor_grok
* REST: Rename remote.info to cluster.remote_info
* REST: Fixed bad YAML comments
* REST: Force dummy scripts to be strings, not numbers
* REST: Fix bad YAML in search/110_field_collapsing.yml
* REST: Adjust percentile tests to work with Perl number handling
Queries that create a scroll context cannot use the cache.
They modify the search context during their execution so using the cache
can lead to duplicate results for the next scroll query.
This change fails the entire request if the request_cache option is explicitly set
on a query that creates a scroll context (`scroll=1m`) and makes sure internally that we never
use the cache for these queries when the option is not explicitly used.
For 6.x a deprecation log will be printed instead of failing the entire request and the request_cache hint
will be ignored (forced to false).
This change adds a new `_split` API that allows splitting indices into a new
index with a power-of-two multiple of the number of shards of the source index. This API works
alongside the `_shrink` API but doesn't require any shard relocation before
indices can be split.
The split operation is conceptually an inverse `_shrink` operation since we
initialize the index with a _synthetic_ number of routing shards that are used
for the consistent hashing at index time. Compared to indices created with
earlier versions this might produce slightly different shard distributions but
has no impact on the per-index backwards compatibility. For now, the user is
required to prepare an index to be splittable by setting the
`index.number_of_routing_shards` at index creation time. The setting allows the
user to prepare the index to be splittable in factors of
`index.number_of_routing_shards` ie. if the index is created with
`index.number_of_routing_shards: 16` and `index.number_of_shards: 2` it can be
split into `4, 8, 16` shards. This is an intermediate step until we can make
this the default. This also allows us to safely backport this change to 6.x.
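A sketch of the full flow under these assumptions (index names and shard counts hypothetical; the source index is made read-only before the split):
```
PUT /source_index
{
  "settings": { "index.number_of_shards": 2, "index.number_of_routing_shards": 16 }
}

PUT /source_index/_settings
{ "index.blocks.write": true }

POST /source_index/_split/target_index
{
  "settings": { "index.number_of_shards": 4 }
}
```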
The `_split` operation is implemented internally as a DeleteByQuery on the
lucene level that is executed while the primary shards execute their initial
recovery. Subsequent merges that are triggered due to this operation will not be
executed immediately. All merges will be deferred until the shards are started
and will then be throttled accordingly.
This change is intended for the 6.1 feature release but will not support pre-6.1
indices to be split unless these indices have been shrunk before. In that case
these indices can be split backwards into their original number of shards.