Check it out:
```
$ curl -u elastic:password -HContent-Type:application/json -XPOST localhost:9200/test/_update/foo?pretty -d'{
  "dac": {}
}'
{
  "error" : {
    "root_cause" : [
      {
        "type" : "x_content_parse_exception",
        "reason" : "[2:3] [UpdateRequest] unknown field [dac] did you mean [doc]?"
      }
    ],
    "type" : "x_content_parse_exception",
    "reason" : "[2:3] [UpdateRequest] unknown field [dac] did you mean [doc]?"
  },
  "status" : 400
}
```
The tricky thing about implementing this is that x-content doesn't
depend on Lucene. So this works by creating an extension point for the
error message using SPI. Elasticsearch's server module provides the
"spell checking" implementation.
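Roughly, the shape of such an extension point looks like the sketch below. The interface and class names are hypothetical, not Elasticsearch's actual classes; only the SPI mechanism (`ServiceLoader`) is the point:
```java
import java.util.List;
import java.util.Optional;
import java.util.ServiceLoader;

// Hypothetical extension point: the parsing layer only knows about this
// interface, while another module can register an implementation via
// META-INF/services. Names here are made up; only the SPI mechanism matters.
interface UnknownFieldSuggester {
    Optional<String> suggest(String unknownField, List<String> candidates);
}

final class UnknownFieldErrors {
    // Pick up whichever implementation (if any) is on the classpath.
    private static final UnknownFieldSuggester SUGGESTER =
        ServiceLoader.load(UnknownFieldSuggester.class).findFirst().orElse(null);

    static String message(String unknownField, List<String> candidates) {
        String base = "unknown field [" + unknownField + "]";
        if (SUGGESTER == null) {
            return base;
        }
        return SUGGESTER.suggest(unknownField, candidates)
            .map(s -> base + " did you mean [" + s + "]?")
            .orElse(base);
    }
}
```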
We seem to have settled on the `ContextParser` interface for parsing
stuff, mostly because `ObjectParser` implements it. We don't really
*need* the old `Aggregator.Parser` interface any more because it
duplicates `ContextParser` but with the arguments reversed.
This adds support to `AggregationSpec` to declare aggregation parsers
using `ContextParser`. This should integrate cleanly with
`ObjectParser`. It doesn't drop support for `Aggregator.Parser` or
change the plugin interface at all, so it *should* be safe to backport to
7.x. We can then remove `Aggregator.Parser` in a follow-up targeted only at 8.0.
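To see why the two interfaces are redundant, here is a sketch using simplified stand-ins for the real types (the real `ContextParser` and `Aggregator.Parser` live in Elasticsearch; the shapes below only mirror them):
```java
import java.io.IOException;

// Simplified stand-ins, just to show the shapes of the two interfaces.
interface XContentParser {}
interface AggregationBuilder {}

// Roughly ContextParser<Context, T>: (parser, context) -> T
interface ContextParser<Context, T> {
    T parse(XContentParser parser, Context context) throws IOException;
}

// Roughly Aggregator.Parser: (name, parser) -> AggregationBuilder,
// i.e. the same shape with the arguments reversed.
interface AggregatorParser {
    AggregationBuilder parse(String aggregationName, XContentParser parser) throws IOException;
}

final class Adapters {
    // Adapting the old interface to the new one is a one-liner, which is why
    // keeping both around buys us nothing.
    static ContextParser<String, AggregationBuilder> adapt(AggregatorParser old) {
        return (parser, name) -> old.parse(name, parser);
    }
}
```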
We added a new rounding in #50609 that handles offsets to the start and
end of the rounding so that we could support `offset` in the `composite`
aggregation. This starts moving `date_histogram` to that new rounding.
* Adds support for geo-bounds filtering in geogrid aggregations (#50002)
It is fairly common to filter the geo point candidates in
geohash_grid and geotile_grid aggregations according to some
viewable bounding box. This change introduces the option of
specifying this filter directly in the tiling aggregation.
This is even more relevant to `geo_shape`, where the bounds restrict the
shape to lie within the bounds.
This optional `bounds` parameter is parsed in an equivalent fashion to
the bounds specified in the geo_bounding_box query.
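Conceptually the filter is just a bounding-box check applied to each candidate point before it is bucketed into a grid cell; a minimal sketch with illustrative names, not the aggregation's actual implementation:
```java
final class GridBoundsFilter {
    final double top;     // max latitude
    final double bottom;  // min latitude
    final double left;    // west longitude
    final double right;   // east longitude

    GridBoundsFilter(double top, double bottom, double left, double right) {
        this.top = top;
        this.bottom = bottom;
        this.left = left;
        this.right = right;
    }

    boolean accepts(double lat, double lon) {
        if (lat > top || lat < bottom) {
            return false;
        }
        if (left <= right) {
            return lon >= left && lon <= right; // normal box
        }
        return lon >= left || lon <= right;     // box crosses the dateline
    }
}
```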
* Track Snapshot Version in RepositoryData (#50930)
Add tracking of snapshot versions to RepositoryData to make BwC logic more efficient.
Follow up to #50853
Currently, proxy mode allows a remote cluster connection to be setup by
expecting all open connections to be routed through an intermediate
proxy. The proxy must use some logic to ensure that the connections end
up on the correct remote cluster. One mechanism provided is that the
default distribution TLS implementations will forward the host component
of the configured address to the remote connection using the SNI
extension. This is limiting as it requires that the proxy be configured
in a way that always uses a valid hostname as the proxy address.
Instead, this commit adds an additional setting to allow the server_name
to be configured independently. This allows the proxy address to be
specified as an IP literal, but the server_name specified as an arbitrary
string which still must be a valid hostname. It also decouples the
server_name from the requirement of being a DNS resolvable domain.
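The underlying idea can be shown with the plain JDK TLS API: the address we connect to and the SNI name we present are independent. The IP, port, and hostname below are made-up values, and this is not the Elasticsearch transport code:
```java
import javax.net.ssl.SNIHostName;
import javax.net.ssl.SNIServerName;
import javax.net.ssl.SSLParameters;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;
import java.util.List;

public class SniExample {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        // Connect to an IP literal...
        try (SSLSocket socket = (SSLSocket) factory.createSocket("10.0.0.1", 9400)) {
            SSLParameters params = socket.getSSLParameters();
            // ...but present an arbitrary (still valid) hostname in the SNI extension.
            List<SNIServerName> names = List.of(new SNIHostName("cluster-b.example.com"));
            params.setServerNames(names);
            socket.setSSLParameters(params);
            socket.startHandshake();
        }
    }
}
```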
Generally speaking, deprecated analysis components in Elasticsearch will issue deprecation
warnings when they are first used. However, this means that no warnings are emitted when
indexes are created with deprecated components, and users have to actually index a document
to see warnings. This makes it much harder to see these warnings and act on them at
appropriate times.
This is worse in the case where components throw exceptions on upgrade. In this case, users
will not be aware of a problem until a document is indexed, instead of at index creation time.
This commit adds a new check that pushes an empty string through all user-defined analyzers
and normalizers when an IndexAnalyzers object is built for each index; deprecation warnings
and exceptions are now emitted when indexes are created or opened.
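As a sketch of the check described above, using the plain Lucene analysis API (the method and surrounding class are illustrative, not the actual IndexAnalyzers code): push an empty string through each analyzer and consume the token stream, purely for the side effects of deprecation warnings or exceptions.
```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import java.io.IOException;

final class AnalyzerChecks {
    static void checkAnalyzers(Iterable<Analyzer> analyzers) throws IOException {
        for (Analyzer analyzer : analyzers) {
            try (TokenStream ts = analyzer.tokenStream("", "")) {
                ts.reset();
                while (ts.incrementToken()) {
                    // consume; we only care about warnings or exceptions being triggered
                }
                ts.end();
            }
        }
    }
}
```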
Fixes #42349
* Move metadata storage to Lucene (#50907)
Today we split the on-disk cluster metadata across many files: one file for the metadata of each
index, plus one file for the global metadata and another for the manifest. Most metadata updates
only touch a few of these files, but some must write them all. If a node holds a large number of
indices then it's possible its disks are not fast enough to process a complete metadata update before timing out. In severe cases affecting master-eligible nodes this can prevent an election
from succeeding.
This commit uses Lucene as the metadata storage for the cluster state, and is a squashed version
of the following PRs that were targeting a feature branch (a minimal sketch of the underlying idea follows the list):
* Introduce Lucene-based metadata persistence (#48733)
This commit introduces `LucenePersistedState` which master-eligible nodes
can use to persist the cluster metadata in a Lucene index rather than in
many separate files.
Relates #48701
* Remove per-index metadata without assigned shards (#49234)
Today on master-eligible nodes we maintain per-index metadata files for every
index. However, we also keep this metadata in the `LucenePersistedState`, and
only use the per-index metadata files for importing dangling indices. Since
there is no point in importing a dangling index without any shard data, we
do not need to maintain these extra files any more.
This commit removes per-index metadata files from nodes which do not hold any
shards of those indices.
Relates #48701
* Use Lucene exclusively for metadata storage (#50144)
This moves metadata persistence to Lucene for all node types. It also reenables BWC and adds
an interoperability layer for upgrades from prior versions.
This commit disables a number of tests related to dangling indices and command-line tools.
Those will be addressed in follow-ups.
Relates #48701
* Add command-line tool support for Lucene-based metadata storage (#50179)
Adds command-line tool support (unsafe-bootstrap, detach-cluster, repurpose, & shard
commands) for the Lucene-based metadata storage.
Relates #48701
* Use single directory for metadata (#50639)
Earlier PRs for #48701 introduced a separate directory for the cluster state. This is not needed,
though, and it imposes an unnecessary additional cognitive burden on users.
Co-Authored-By: David Turner <david.turner@elastic.co>
* Add async dangling indices support (#50642)
Adds support for writing out dangling indices in an asynchronous way. Also provides an option to
avoid writing out dangling indices at all.
Relates #48701
* Fold node metadata into new node storage (#50741)
Moves node metadata over to the new storage mechanism (see #48701) as the authoritative source.
* Write CS asynchronously on data-only nodes (#50782)
Writes cluster states out asynchronously on data-only nodes. The main reason for writing out
the cluster state at all is so that the data-only nodes can snap into a cluster, can do a bit of
bootstrap validation, and so that the shard recovery tools work.
Cluster states that are written asynchronously have their voting configuration adapted to a
non-existent configuration so that these nodes cannot mistakenly become master even if their node
role is changed back and forth.
Relates #48701
* Remove persistent cluster settings tool (#50694)
Adds the elasticsearch-node remove-settings tool to remove persistent settings from the
on-disk cluster state in cases where it contains incompatible settings that prevent the cluster from
forming.
Relates #48701
* Make cluster state writer resilient to disk issues (#50805)
Adds handling to make the cluster state writer resilient to disk issues. Relates to #48701
* Omit writing global metadata if no change (#50901)
Uses the same optimization for the new cluster state storage layer as the old one, writing global
metadata only when changed. Avoids writing out the global metadata if none of the persistent
fields changed. Speeds up server:integTest by ~10%.
Relates #48701
* DanglingIndicesIT should ensure node removed first (#50896)
These tests occasionally failed because the deletion was submitted before the
restarting node was removed from the cluster, causing the deletion not to be
fully acked. This commit fixes this by checking the restarting node has been
removed from the cluster.
Co-authored-by: David Turner <david.turner@elastic.co>
* fix tests
Co-authored-by: David Turner <david.turner@elastic.co>
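As noted above, here is a very reduced sketch of the underlying idea in plain Lucene: keep metadata as documents in a local index and replace them in place on update, so a metadata write is one commit rather than many small files. Field names and the path are made up; this is not the actual `LucenePersistedState` code.
```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StoredField;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.FSDirectory;
import java.nio.file.Paths;

public class MetadataStoreSketch {
    public static void main(String[] args) throws Exception {
        try (FSDirectory dir = FSDirectory.open(Paths.get("sketch/_state"));
             IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig())) {
            Document doc = new Document();
            doc.add(new StringField("type", "index-metadata", Field.Store.YES));
            doc.add(new StringField("index-uuid", "abc123", Field.Store.YES));
            // The serialized metadata blob itself is just a stored field.
            doc.add(new StoredField("data", "{\"settings\":{}}"));
            // Replace any previous version of this index's metadata in place,
            // then make the whole update durable with a single commit.
            writer.updateDocument(new Term("index-uuid", "abc123"), doc);
            writer.commit();
        }
    }
}
```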
Currently, the connection manager is configured with a default profile
for both the sniff and proxy connection strategies. This profile
correctly reflects the expected number of connections (6 for sniff, 18
for proxy). This commit removes the proxy strategy's use of the
per-connection-attempt profile configuration.
Additionally, it refactors other unnecessary code around the connection
manager. The connection manager now can always be built inside the
remote connection.
Currently we reuse the same test connection for all connection attempts
in the testConcurrentConnectsAndDisconnects test. This means that if the
connection fails due to a pre-existing connection, the connection will
be closed, impacting the state of all connection attempts. This commit
fixes the test by returning a unique connection for each attempt.
Fixes #49903.
Follow up to #50692 that starts writing a `min_version` field to
the `RepositoryData` so that pre-7.6 ES versions cannot read it
(and potentially corrupt it if they attempt to modify the repo contents)
after the repository moved to the new metadata format.
When deserializing time zones in the Rounding classes we used to include a tiny
normalization step via `DateUtils.of(in.readString())` that was lost in #50609.
It's at least necessary for some tests, e.g. the cause of #50827 is that when
sending the default time zone ZoneOffset.UTC on a stream pre 7.0 we convert it
to a "UTC" string id via `DateUtils.zoneIdToDateTimeZone`. This then gets read
back as a UTC ZoneRegion, which should behave the same but fails the equality
tests in our serialization tests. Reverting to the previous behaviour with an
additional normalization step on 7.x.
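The equality problem can be reproduced with plain java.time, which is essentially what the normalization step smooths over:
```java
import java.time.ZoneId;
import java.time.ZoneOffset;

public class ZoneNormalization {
    public static void main(String[] args) {
        ZoneId region = ZoneId.of("UTC");  // a region-based ZoneId, not a ZoneOffset
        System.out.println(region.equals(ZoneOffset.UTC));              // false
        System.out.println(region.normalized().equals(ZoneOffset.UTC)); // true
    }
}
```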
Co-authored-by: Nik Everett <nik9000@gmail.com>
Closes #50827
Today we make multiple attempts to corrupt the translog header in
`TranslogHeaderTests#testCurrentHeaderVersion`, but if we are extraordinarily
unlucky then this sequence of corruptions may restore the file to its original
state. This change adjusts the test to only corrupt the file once, which is
certain not to leave the file in its original state.
The test checked queue size and active count; however,
ThreadPoolExecutor pulls the request out of the queue before marking
the worker active, risking that we think all tasks are done when they
are not. Now we check the completed-tasks metric instead, which is
guaranteed to be monotonic.
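The race and the fix can be illustrated with a plain ThreadPoolExecutor (this is the JDK behaviour the commit message describes, not Elasticsearch code):
```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CompletedTasksCheck {
    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
            2, 2, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
        int submitted = 100;
        for (int i = 0; i < submitted; i++) {
            executor.execute(() -> { /* work */ });
        }
        // Racy: getQueue().isEmpty() && getActiveCount() == 0 can transiently be
        // true while a task sits between "taken off the queue" and "worker active".
        // Monotonic and therefore safe to wait on:
        while (executor.getCompletedTaskCount() < submitted) {
            Thread.sleep(10);
        }
        executor.shutdown();
    }
}
```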
Relates #50769
Today we periodically check the indexing buffer memory every 5 seconds
or after we have used 1/30 of the configured memory. If the total used
memory is over the threshold, then we refresh the "largest" shards. If
refreshing takes longer than these intervals (i.e., 5s or 1/30 of the buffer), then
we continue to enqueue refreshes to these shards. This leads to two
issues:
- The refresh thread pool can be exhausted and other shards can't refresh
- We execute too many refreshes for the "largest" shards
With this change, we only refresh the largest shards if they are not already refreshing.
Here we rely on the periodic check to trigger another refresh if needed. We could
harden this by making the ongoing refresh trigger the memory check when
it completes; I opted not to do that in this PR for simplicity.
See: https://discuss.elastic.co/t/write-queue-continue-to-rise/213652/
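A conceptual sketch of the "only refresh if not already refreshing" guard; this mirrors the idea, not the actual IndexingMemoryController code:
```java
import java.util.concurrent.Executor;
import java.util.concurrent.atomic.AtomicBoolean;

final class ShardRefreshGuard {
    private final AtomicBoolean refreshing = new AtomicBoolean(false);

    void maybeRefresh(Executor refreshExecutor, Runnable refresh) {
        // At most one refresh per shard in flight; if one is already running,
        // do nothing and let the next periodic memory check try again.
        if (refreshing.compareAndSet(false, true)) {
            refreshExecutor.execute(() -> {
                try {
                    refresh.run();
                } finally {
                    refreshing.set(false);
                }
            });
        }
    }
}
```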
When a composite aggregation is reduced using the results from an
index that has one of the fields unmapped we were throwing away the
formatter. This is mildly annoying, except in the case of IP addresses,
which were coming out as non-UTF-8 characters and tripping assertions.
This carefully preserves the formatter from the working bucket.
Closes #50600
This change fixes the upgrade of index metadata that contains
a custom similarity with options that are not compatible with BM25.
The upgrade doesn't need a real similarity service, so we fake one that
resolves all custom similarities to BM25, but this logic fails because the
BM25 provider checks that all options are compatible. This commit removes
the verification step as it is not needed during the upgrade (the verification
is done when the index is restored/opened).
Closes #50763
* Fix Snapshot Shard Status Request Deduplication
The request deduplication didn't actually work for these requests
since they had no `equals` and `hashCode`, so the deduplicator wouldn't
recognize equal requests.
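The fix boils down to giving the request a proper identity so that equal requests collapse onto the same key; a sketch with illustrative names:
```java
import java.util.Objects;

// Two logically identical requests must collapse onto the same key, which only
// works if the key type defines equals and hashCode.
final class ShardStatusRequestKey {
    final String snapshot;
    final String shardId;

    ShardStatusRequestKey(String snapshot, String shardId) {
        this.snapshot = snapshot;
        this.shardId = shardId;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        ShardStatusRequestKey other = (ShardStatusRequestKey) o;
        return snapshot.equals(other.snapshot) && shardId.equals(other.shardId);
    }

    @Override
    public int hashCode() {
        return Objects.hash(snapshot, shardId);
    }
}
```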
Replaces the "funny"
`Function<String, ConstructingObjectParser<T, Void>>` with a much
simpler `ConstructingObjectParser<T, String>`. This makes pretty much
all of our object parsers static.
* Fix Snapshot Repository Corruption in Downgrade Scenarios (#50692)
This PR introduces test infrastructure for downgrading a cluster while interacting with a given repository.
It fixes the fact that repository metadata in the new format could be written while there are still older snapshots in the repository that require the old-format metadata to be restorable.
A very large number of recursive calls can cause a stack overflow
exception. This commit forks the recursive calls for non-async
processors. Once forked, each thread will handle at most 10
recursive calls to help keep the stack size and thread count
down to a reasonable size.
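A sketch of the "recurse a little, then fork" idea; the depth limit of 10 comes from the description above, everything else is illustrative:
```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

final class BoundedRecursion {
    private static final int MAX_DEPTH = 10;
    private final ExecutorService executor = Executors.newFixedThreadPool(4);

    void process(int remainingSteps, int depth) {
        if (remainingSteps == 0) {
            return;
        }
        if (depth >= MAX_DEPTH) {
            // Fork: continue on a fresh stack with the depth counter reset,
            // so the stack never grows without bound.
            executor.execute(() -> process(remainingSteps - 1, 0));
        } else {
            process(remainingSteps - 1, depth + 1);
        }
    }
}
```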
Adds support for the `offset` parameter to the `date_histogram` source
of composite aggs. The `offset` parameter is supported by the normal
`date_histogram` aggregation and is useful for folks that need to
measure things from, say, 6am one day to 6am the next day.
This is implemented by creating a new `Rounding` that knows how to
handle offsets and delegates to other rounding implementations. That
implementation doesn't fully implement the `Rounding` contract, namely
`nextRoundingValue`. That method isn't used by composite aggs so I can't
be sure that any implementation that I add will be correct. I propose to
leave it throwing `UnsupportedOperationException` until I need it.
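Conceptually the new rounding shifts values into "offset space", delegates, and shifts back; a sketch against a stand-in interface, not the real `Rounding` class:
```java
// Stand-in for the real Rounding contract, just enough to show the delegation.
interface SimpleRounding {
    long round(long utcMillis);
    long nextRoundingValue(long utcMillis);
}

final class OffsetRounding implements SimpleRounding {
    private final SimpleRounding delegate;
    private final long offsetMillis;

    OffsetRounding(SimpleRounding delegate, long offsetMillis) {
        this.delegate = delegate;
        this.offsetMillis = offsetMillis;
    }

    @Override
    public long round(long utcMillis) {
        // Shift into "offset space", round there, then shift back.
        return delegate.round(utcMillis - offsetMillis) + offsetMillis;
    }

    @Override
    public long nextRoundingValue(long utcMillis) {
        // Not needed by composite aggs, so deliberately left unimplemented.
        throw new UnsupportedOperationException();
    }
}
```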
Closes #48757
Previously, the following situation would throw an error:
* A search contains a `collapse` on a particular field.
* The search spans multiple indices, and in one index the field is mapped as a
concrete field, but in another it is a field alias.
The error occurs when we attempt to merge `CollapseTopFieldDocs` across shards.
When merging, we validate that the name of the collapse field is the same across
shards. But the name has already been resolved to the concrete field name, so it
will be different on shards where the field was mapped as an alias vs. shards
where it was a concrete field.
This PR updates the collapse field name in `CollapseTopFieldDocs` to the
original requested field, so that it will always be consistent across shards.
Note that in #32648, we already made a fix around collapsing on field aliases.
However, we didn't test this specific scenario where the field was mapped as an
alias in only one of the indices being searched.
Currently, if an updateable synonym filter is included in a multiplexer filter,
it is not reloaded via _reload_search_analyzers because the multiplexer
itself doesn't pass on the analysis mode of the filters it contains, so it's not
recognized as "updateable" itself. Instead we can check and merge the
AnalysisMode settings of all filters in the multiplexer and use the resulting
mode (e.g. search-time only) for the multiplexer itself, thus making any synonym
filters contained in it reloadable. This, of course, will also make the
analyzers using the multiplexer usable at search time only.
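A simplified sketch of the merge: the result is the most restrictive mode of the contained filters, so one search-time-only (updateable) synonym filter makes the whole multiplexer search-time only. The enum below is a stand-in for the real AnalysisMode:
```java
enum Mode {
    INDEX_TIME, SEARCH_TIME, ALL;

    // Pick the most restrictive mode, rejecting impossible combinations.
    Mode merge(Mode other) {
        if (this == ALL) return other;
        if (other == ALL) return this;
        if (this != other) {
            throw new IllegalStateException("cannot combine index-time-only and search-time-only filters");
        }
        return this;
    }
}

// Usage sketch: start from ALL and fold in each contained filter's mode; a single
// SEARCH_TIME (updateable) synonym filter leaves the multiplexer at SEARCH_TIME.
// Mode result = Mode.ALL;
// for (Mode filterMode : containedFilterModes) { result = result.merge(filterMode); }
```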
Closes #50554
strict_date_optional_time changes to have an optional minute part.
It already allowed an optional second and fraction-of-second part.
This allows parsing 2018-01-01T00+01, 2018-01-01T00:00+01, 2018-01-01T00:00:00+01, and 2018-01-01T00:00:00.000+01.
It won't allow parsing a timezone without an hour part, as this is not allowed by the ISO 8601 spec.
Closes #49351
ElasticsearchException.guessRootCauses would return the wrapper exception if
the inner exception was not an ElasticsearchException. Fixed to never return
wrapper exceptions.
At least the following APIs change root_cause.0.type as a result:
- _update with a bad script
- _index with a bad pipeline
Relates #50417
This PR adds per-field metadata that can be set in the mappings and is later
returned by the field capabilities API. This metadata is completely opaque to
Elasticsearch but may be used by tools that index data in Elasticsearch to
communicate metadata about fields with tools that then search this data. A
typical example that has been requested in the past is the ability to attach
a unit to a numeric field.
In order to not bloat the cluster state, Elasticsearch requires that this
metadata be small:
- keys can't be longer than 20 chars,
- values can only be numbers or strings of no more than 50 chars - no inner
arrays or objects,
- the metadata can't have more than 5 keys in total.
Given that metadata is opaque to Elasticsearch, field capabilities don't try to
do anything smart when merging metadata about multiple indices; the union of
all field metadata is returned.
Here is what the meta might look like in mappings:
```json
{
  "properties": {
    "latency": {
      "type": "long",
      "meta": {
        "unit": "ms"
      }
    }
  }
}
```
And then in the field capabilities response:
```json
{
  "latency": {
    "long": {
      "searchable": true,
      "aggregatable": true,
      "meta": {
        "unit": [ "ms" ]
      }
    }
  }
}
```
When there are no conflicts, values are arrays of size 1, but when there are
conflicts, Elasticsearch includes all unique values in this array, without
giving any way to know which index has which metadata value:
```json
{
  "latency": {
    "long": {
      "searchable": true,
      "aggregatable": true,
      "meta": {
        "unit": [ "ms", "ns" ]
      }
    }
  }
}
```
Closes #33267
Introduce a new static setting, `gateway.auto_import_dangling_indices`, which prevents dangling indices from being automatically imported. Part of #48366.
In security we currently monitor a set of files for changes:
- config/role_mapping.yml (or alternative configured path)
- config/roles.yml
- config/users
- config/users_roles
This commit prevents unnecessary reloading when a file change doesn't actually change the internal structure.
Backport of: #50207
Co-authored-by: Anton Shuvaev <anton.shuvaev91@gmail.com>
We *very* commonly have objects with ctors like:
```
public Foo(String name)
```
And then declare a bunch of setters on the object. Every aggregation
works like this, for example. This change teaches `ObjectParser` how to
build these aggregations all on its own, without any help. This'll make
it much cleaner to parse aggs, and, probably, a bunch of other things.
It'll let us remove lots of wrapping. I've used this new power for the
`avg` aggregation just to prove that it works outside of a unit test.
If an auto-refresh happens, then version_map_memory is reset to 0. By
default, the auto-refresh occurs every second for the first 30
seconds, until the search becomes idle.
Closes #50362
In 7.x, an internal API used for validating remote clusters does not throw; see #50420 for the
details. This change implements a workaround for remote cluster validation, only for the 7.x branches.
Fixes #50420
A failure of a recovering shard can race with a new allocation of the shard, and cause the new
allocation to be failed as well. This can result in a shard being marked as initializing in the cluster
state, but no longer existing on the node.
Closes #50508
This change removes a no longer used method, `fromByte`, in
IndicesOptions. This method was necessary for backwards compatibility
with versions prior to 6.4.0 and was used when talking to those
versions. However, the minimum wire compatibility version has changed
and we no longer use this code.
Backport of #50665
This replaces the hand-written xcontent parsers for significance
heuristics with `ObjectParser` and parsing named xcontent.
As a happy accident, this was the last user of `ParseFieldRegistry`, so
this PR entirely removes that class.
Closes #25519
Currently, we use delayed address resolution in the proxy strategy tests
to allow tests to connect to different addresses. Unfortunately, this
has the potential to introduce races as the address is resolved on each
connection attempt. The number of connection attempts can vary based on
when connections are opening and closing. This commit modifies the tests
by allowing them to specifically control which address is used.
Related to #50618
This commit fixes a serialization test bug:
https://gradle-enterprise.elastic.co/s/7x2ct6yywkw3o
Rarely, stats for the same processor are generated, and
the production code then sums these stats up. However,
the test code wasn't summing them up in that case,
which caused inconsistencies between the actual and expected results.
Closes #50507