Commit Graph

4834 Commits

Albert Zaharovits c57ccd99f7
Just log 401 stacktraces (#55774)
Ensure stacktraces of 401 errors for unauthenticated users are logged
but not returned in the response body.
2020-06-10 20:39:32 +03:00
Armin Braun 85f5c4192b
Improve Test Coverage for Old Repository Metadata Formats (#57915) (#57922)
Use the hack from `CorruptedBlobStoreRepositoryIT` in more snapshot
failure tests to verify that BwC repository metadata is handled properly
in these scenarios that so far lacked test coverage.
Also, some minor related DRY-up of snapshot tests.

Relates #57798
2020-06-10 13:27:01 +02:00
Yannick Welsch 80f221e920
Use clean thread context for transport and applier service (#57792) (#57914)
Adds assertions to Netty to make sure that its threads are not polluted by thread contexts (and
also that thread contexts are not leaked). Moves the ClusterApplierService to use the system
context (same as we do for MasterService), which allows removing a hack from
TemplateUpgradeService and makes it clearer that applying CS updates is fully executed under the
system context.
2020-06-10 10:30:28 +02:00
Armin Braun fe85bdbe6f
Fix Remote Recovery Being Retried for Removed Nodes (#57608) (#57913)
If a node is disconnected we retry. It does not make sense
to retry the recovery if the node has been removed from the cluster, though.
=> added a CS listener that cancels the recovery for removed nodes

Also, we were running the retry on the `SAME` pool, which for each retry will
be the scheduler pool. Since the error path of the listener we use here
does blocking operations when closing the resources used by the recovery,
we can't use the `SAME` pool here, since not all exceptions go through the `ActionListenerResponseHandler`
threading (e.g. `NodeNotConnectedException`).

Closes #57585
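A minimal sketch of the listener idea described above, using the real `ClusterStateListener` interface but with otherwise hypothetical names and wiring:

```java
import org.elasticsearch.cluster.ClusterChangedEvent;
import org.elasticsearch.cluster.ClusterStateListener;

// Sketch only: cancel an ongoing peer recovery once its source node leaves the cluster.
class RecoverySourceNodeRemovedListener implements ClusterStateListener {
    private final String sourceNodeId;     // node we are recovering from
    private final Runnable cancelRecovery; // hypothetical callback that aborts the recovery

    RecoverySourceNodeRemovedListener(String sourceNodeId, Runnable cancelRecovery) {
        this.sourceNodeId = sourceNodeId;
        this.cancelRecovery = cancelRecovery;
    }

    @Override
    public void clusterChanged(ClusterChangedEvent event) {
        // Only react to cluster state updates in which nodes actually left.
        if (event.nodesRemoved() && event.state().nodes().get(sourceNodeId) == null) {
            cancelRecovery.run();
        }
    }
}
```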
2020-06-10 09:41:52 +02:00
Armin Braun d579420452
Stop Serializing Exceptions in SnapshotInfo (#57866) (#57898)
In ff9e8c622427d42a2d87b4ceb298d043ae3c4e6a we changed the format
used when serializing snapshot failures in the cluster state and
`SnapshotInfo`. This turned them from a short string holding all the
nested exception messages into a multi-kB stacktrace in many cases.
This is not great if, for example, you snapshot a large number of shards that all fail,
and it massively blows up the size of the GET snapshots response
if it contains snapshots with failures.
This change reverts to the format used for exceptions before the above commit.

Also, this change short-circuits logging and serialization of the failure
for an aborted snapshot, where we don't care about the specific message at all,
and aligns the message to "aborted" in all cases (currently, if we aborted before any IO
it would have been "aborted", and an exception when aborting later during IO).
2020-06-10 08:41:03 +02:00
Gordon Brown aab6317260
[7.x] Include hidden indices in snapshots by default (#57325)
Previously, hidden indices were not included in snapshots by default, unless
specified using one of the usual methods for doing so: naming indices directly,
using index patterns starting with a `.`, or specifying `expand_wildcards` to
a value that includes `hidden` (e.g. `all` or `hidden,open`).

This commit changes the default `expand_wildcards` value to include hidden
indices.
2020-06-09 16:01:52 -06:00
Yannick Welsch 9eec819c5b Revert "Use clean thread context for transport and applier service (#57792)"
This reverts commit 259be236cf.
2020-06-09 22:24:54 +02:00
Yannick Welsch 8199956937 Revert "Assert on request headers only (#57792)"
This reverts commit b5d3565214.
2020-06-09 22:24:35 +02:00
Henning Andersen 1e8e115ae1 Rollover: avoid heavy lifting in dry-run/validation (#57894)
Fixed two newly introduced issues with rollover:
1. Using auto-expand replicas, rollover could result in unexpected log
messages on future indexes.
2. It did a reroute and other heavy work on the network thread.

Closes #57706
Supersedes #57865
Relates #53965
2020-06-09 22:07:30 +02:00
Jake Landis fff0a106c9
[7.x] Support `if_seq_no` and `if_primary_term` for ingest (#55430) (#57768)
Allow for optimistic concurrency control during ingest by checking the
sequence number and primary term. This is accomplished by defining
_if_seq_no and _if_primary_term in the pipeline, similarly to _version
and _version_type.

Closes #41255
Co-authored-by: Maria Ralli <mariai.ralli@gmail.com>
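A purely illustrative sketch of the concurrency-control idea described above, with plain Java standing in for the actual ingest pipeline machinery (all field and map names are made up):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: copy the expected seq_no / primary_term from the document into the request
// metadata, so the resulting index operation fails with a version conflict if the
// target document changed in the meantime.
class ConcurrencyMetadataStep {
    static void apply(Map<String, Object> source, Map<String, Object> metadata) {
        metadata.put("_if_seq_no", source.get("expected_seq_no"));
        metadata.put("_if_primary_term", source.get("expected_primary_term"));
    }

    public static void main(String[] args) {
        Map<String, Object> doc = Map.of("expected_seq_no", 42L, "expected_primary_term", 1L);
        Map<String, Object> meta = new HashMap<>();
        apply(doc, meta);
        System.out.println(meta); // contains _if_seq_no=42 and _if_primary_term=1
    }
}
```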
2020-06-09 14:20:26 -05:00
Andrei Dan 3945712c72
[7.x] ILM add data stream support to the Shrink action (#57616) (#57884)
The shrink action creates a shrunken index with the target number of shards.
This makes the shrink action data stream aware. If the ILM-managed index is
part of a data stream, the shrink action will make sure to swap the original
managed index with the shrunken one as part of the data stream's backing
indices and then delete the original index.

(cherry picked from commit 99aeed6acf4ae7cbdd97a3bcfe54c5d37ab7a574)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
2020-06-09 19:45:22 +01:00
Jim Ferenczi ea696198e9 Fix rounding composite aggs on sorted index (#57867)
This commit fixes a bug in the composite aggregation when the index
is sorted and the primary composite source needs to round values (date_histo).
In that case, we cannot take into account the subsequent sources even if they
match the index sort, because the rounding of the primary sort value may break
the original index order.

Fixes #57849
2020-06-09 20:41:45 +02:00
Nik Everett 44a79d1739
Deprecate Rounding#round (#57845) (#57893)
This deprecates `Rounding#round` and `Rounding#nextRoundingValue` in
favor of calling
```
Rounding.Prepared prepared = rounding.prepare(min, max);
...
prepared.round(val)
```

because it is always going to be faster to prepare once. There
are going to be some cases where we won't know what to prepare *for*
and in those cases you can call `prepareForUnknown` and still be faster
than calling the deprecated method over and over and over again.

Ultimately, this is important because it doesn't look like there is an
easy way to cache `Rounding.Prepared` or any of its precursors like
`LocalTimeOffset.Lookup`. Instead, we can just build it at most once per
request.

Relates to #56124
2020-06-09 14:30:56 -04:00
Tim Brooks 2630c80b5d
Fix IndexRecoveryIT transient error test (#57826)
Currently it is possible for a transient network error to disrupt the
start recovery request from the remote node to the source node. This disruption
is racy with the recovery occurring on the source node. It is possible
for the source node to finish and clear its recovery. When this occurs,
the recovery cannot be reestablished and the "no two start" assertion
is tripped. This commit fixes this issue by allowing two starts if the
finalize request has been received.

Fixes #57416.
2020-06-09 10:49:38 -06:00
Tim Brooks 8119b96517
Fix stalled send translog ops request (#57859)
Currently, the translog ops request is reentrant when there is a mapping
update. The impact of this is that a translog ops request ends up waiting on the
pre-existing listener and it is never completed. This commit fixes this
by introducing a new code path to avoid the idempotency logic.
2020-06-09 10:46:34 -06:00
Tim Brooks c17121428e
Fix translog ops action name in channel listener (#57854)
The action name is passed to the `ChannelListener` and is used for
logging purposes. Currently, we are using the incorrect action name for
the translog ops listener. This commit fixes the issue.
2020-06-09 10:38:58 -06:00
Lee Hinman cb2ce3736a
[7.x] Make noop template updates be cluster state noops (#57851) (#57880)
Backports the following commits to 7.x:

    Make noop template updates be cluster state noops (#57851)
2020-06-09 09:26:06 -06:00
Dan Hermann b501b282f8
Change default backing index naming scheme 2020-06-09 09:31:34 -05:00
Nik Everett e7cc2448d2
Save memory when string terms are not on top (#57758) (#57876)
This reworks string flavored implementations of the `terms` aggregation
to save memory when it is under another bucket by dropping the usage of
`asMultiBucketAggregator`.
2020-06-09 10:26:29 -04:00
Yannick Welsch b5d3565214 Assert on request headers only (#57792)
Only assert that actual request headers are empty, as default headers might still be there when stashing the context.
2020-06-09 14:08:25 +02:00
Yannick Welsch 259be236cf Use clean thread context for transport and applier service (#57792)
Adds assertions to Netty to make sure that its threads are not polluted by thread contexts (and
also that thread contexts are not leaked). Moves the ClusterApplierService to use the system
context (same as we do for MasterService), which allows removing a hack from
TemplateUpgradeService and makes it clearer that applying CS updates is fully executed under the
system context.
2020-06-09 12:32:28 +02:00
Tim Brooks 9eaee3da8d
Fix exception check in RecoveryRequestTrackerTests (#57493)
Currently we check that exceptions are the same in the recovery request
tracker test. This is inconsistent because the future wraps the
exception in a new instance. This commit fixes the test by comparing a
random exception message.

Fixes #57199
2020-06-08 15:42:48 -06:00
Lee Hinman fe2eaf0d03
[7.x] Throw exception on duplicate mappings metadata fields (#57839)
In #57701 we changed mappings merging so that duplicate fields specified in mappings cause an
exception during validation. This change makes the same exception be thrown when metadata fields are
duplicated. This will allow us to be strict for now, with plans to make the merging more
fine-grained in a later release.
2020-06-08 14:21:18 -06:00
Tim Brooks 952cf770ed
Reestablish peer recovery after network errors (#57827)
Currently a network disruption will fail a peer recovery. This commit
adds network errors as retryable actions for the source node.
Additionally, it adds sequence numbers to the recovery request to
ensure that the requests are idempotent.

Additionally it adds a reestablish recovery action. The target node
will attempt to reestablish an existing recovery after a network
failure. This is necessary to ensure that the retries occurring on the
source node provide value in bidirectional failures.
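A minimal, self-contained sketch of the request sequence-number idea described above (names are illustrative, not the actual recovery classes):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: each (re)sent recovery request carries a monotonically increasing sequence
// number; the receiving side processes a request only if it is newer than anything seen.
class RetrySeqNoTracker {
    private final AtomicLong lastSeenSeqNo = new AtomicLong(-1);

    /** Returns true if the request is new, false if it duplicates an earlier retry. */
    boolean shouldProcess(long requestSeqNo) {
        return lastSeenSeqNo.getAndUpdate(prev -> Math.max(prev, requestSeqNo)) < requestSeqNo;
    }

    public static void main(String[] args) {
        RetrySeqNoTracker tracker = new RetrySeqNoTracker();
        System.out.println(tracker.shouldProcess(0)); // true  - first attempt
        System.out.println(tracker.shouldProcess(0)); // false - duplicate retry, ignored
        System.out.println(tracker.shouldProcess(1)); // true  - newer retry
    }
}
```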
2020-06-08 14:17:52 -06:00
Lee Hinman 6e8cf0973f
[7.x] Disallow merging existing mapping field definitions in templates (#57701) (#57822)
Backports the following commits to 7.x:

    Disallow merging existing mapping field definitions in templates (#57701)
2020-06-08 12:56:09 -06:00
Armin Braun 0987c0a5f3
Fix Broken Numeric Shard Generations in RepositoryData (#57813) (#57821)
Fix broken numeric shard generations when reading them from the wire
or from the physical repository.
This should be the cheapest way to clean up broken shard generations
in a BwC and safe-to-backport manner for now. We can potentially
further optimize this by also not doing the checks on the generations
based on the versions we see in the `RepositoryData`, but I don't think
it matters much since we will read `RepositoryData` from the cache in almost
all cases.

Closes #57798
2020-06-08 18:36:56 +02:00
Nik Everett ee0ce8ffaf
Fix a bug with missing fields in sig_terms (#57757)
When you run a `significant_terms` aggregation on a field and it *is*
mapped but there aren't any values for it, then the count of the
documents that match the query on that shard still has to be added to
the overall doc count. I broke that in #57361. This fixes that.

Closes #57402
2020-06-08 10:07:14 -04:00
Mayya Sharipova 70e63a365a
Refactor how to determine if a field is metafield (#57378) (#57771)
Previously, to determine whether a field is a meta-field, the static method
MapperService#isMetadataField was used. This method relied on an outdated static list
of meta-fields.

This PR instead changes this method to an instance method that
is also aware of meta-fields registered by plugins.

Related #38373, #41656
Closes #24422
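A self-contained sketch of the idea described above (illustrative types only, not the actual MapperService API): a static check against a fixed list is replaced by an instance whose registry also contains plugin-provided meta-fields.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch: an instance-level registry sees built-in and plugin-registered meta-fields alike.
class MetaFieldRegistry {
    private final Set<String> metaFields = new HashSet<>(Set.of("_index", "_id", "_source"));

    // Plugins register their own meta-fields at startup.
    void register(String field) {
        metaFields.add(field);
    }

    boolean isMetadataField(String field) {
        return metaFields.contains(field);
    }

    public static void main(String[] args) {
        MetaFieldRegistry registry = new MetaFieldRegistry();
        registry.register("_size"); // e.g. contributed by the mapper-size plugin
        System.out.println(registry.isMetadataField("_size")); // true
    }
}
```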
2020-06-08 09:16:18 -04:00
Andrei Dan 1b84e93d83
[7.x] DataStream creation validation allows for prefixed indices (#57750) (#57799)
We want to validate the DataStreams on creation to make sure the future backing
indices would not clash with existing indices in the system (so we can
always roll over the data stream).
This changes the validation logic to allow a DataStream to be created
with a backing index that has a prefix (e.g. `shrink-foo-000001`) even if the
former backing index (`foo-000001`) exists in the system.
The new validation logic will look for potential index conflicts with indices
in the system that have a counter in the name greater than the data stream's
generation.

This ensures that the `DataStream`'s future rollovers are safe because, for a
`DataStream` `foo` of generation 4, we will look for standalone indices of the
form `foo-%06d` with a counter greater than 4 (i.e. validation will fail if
`foo-000006` exists in the system), but we will also allow replacing a
backing index with an index named by prefixing the backing index it replaces.

(cherry picked from commit 695b242d69f0dc017e732b63737625adb01fe595)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
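A self-contained sketch of that conflict check (illustrative names only; the real validation lives in the DataStream creation path):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class DataStreamNameConflict {
    // An existing index conflicts only if it looks exactly like "<dataStream>-<6 digit counter>"
    // and its counter is greater than the data stream's current generation.
    static boolean conflicts(String dataStreamName, long generation, String existingIndex) {
        Matcher m = Pattern.compile(Pattern.quote(dataStreamName) + "-(\\d{6})").matcher(existingIndex);
        if (m.matches() == false) {
            return false; // prefixed indices such as "shrink-foo-000001" are allowed
        }
        return Long.parseLong(m.group(1)) > generation;
    }

    public static void main(String[] args) {
        System.out.println(conflicts("foo", 4, "foo-000006"));        // true  -> validation fails
        System.out.println(conflicts("foo", 4, "foo-000001"));        // false -> allowed
        System.out.println(conflicts("foo", 4, "shrink-foo-000001")); // false -> allowed
    }
}
```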
2020-06-08 13:31:52 +01:00
Armin Braun 004eb8bd7e
Fix Bug With RepositoryData Caching (#57785) (#57800)
* Fix Bug With RepositoryData Caching

This fixes a really subtle bug with caching `RepositoryData`
that can corrupt a repository.
We were caching `RepositoryData` serialized in the newest
metadata format. This led to a confusing situation where
numeric shard generations would be cached in `ShardGenerations`
that were not written to the repository because the repository
or cluster did not yet support `ShardGenerations`.
In the case where shard generations are not actually supported yet,
these cached numeric generations are not safe and there are multiple
scenarios where they would be incorrect, leading to the repository
trying to read shard-level metadata from index-N files that don't exist.
This commit makes it so that cached metadata is always in the same
format as the metadata in the repository.

Relates #57798
2020-06-08 13:16:45 +02:00
Luca Cavanna 7a06a13d99 Add description to submit and get async search, as well as cancel tasks (#57745)
This makes it easier to debug where such tasks come from in case they are returned from the get tasks API.

Also renamed the last occurrence of waitForCompletion to waitForCompletionTimeout in the get async search request.
2020-06-08 11:17:29 +02:00
Armin Braun 619e4f8c02
Make BackgroundIndexer more Efficient (#57781) (#57789)
Improve efficiency of the background indexer by allowing an assertion
for failures to be added while they are produced, preventing them from
queuing up.
Also, add non-blocking stop to the background indexer so that when
stopping multiple indexers we don't needlessly continue indexing
on some indexers while stopping another one.

Closes #57766
2020-06-08 10:18:47 +02:00
Nik Everett 3b1dfa3b5d
Remove deprecated wrapped from scripted_metric (backport of #57627) (#57763)
This removes the deprecated `asMultiBucketAggregator` wrapper from
`scripted_metric`. Unlike most other such removals, this isn't likely to
save much memory. But it does make the internals of the aggregator
slightly less twisted.

Relates to #56487
2020-06-05 16:14:28 -04:00
Martijn van Groningen f170b52e64
Backing indices should use composable template matching with the corresponding data stream name (#57728)
Backport of #57640 to 7.x branch.

Composable templates with exact matches can match the data stream name, but not the backing index name.
Also, if the backing index naming scheme changes, a composable template may never match a backing index.

In that case, mappings and settings may not get applied.
2020-06-05 18:38:22 +02:00
Dan Hermann 3fe93e24a6
[7.x] Prohibit closing the write index for a data stream (#57740) 2020-06-05 11:14:43 -05:00
Jake Landis 459ab9a0b2
[7.x] Ensure type exists for all monitoring configuration (#57399) (#57704)
#47711 and #47246 helped to validate that invalid monitoring settings are
rejected at the time the settings are set. Otherwise an invalid
monitoring setting can find its way into the cluster state and result
in an exception being thrown [1] on cluster state application (thereby
causing significant issues). Some additional monitoring settings have
been identified that can result in an invalid cluster state and likewise
cause exceptions to be thrown on cluster state application.

All settings require a type of either http or local to be
applicable. When a setting is changed, the exporters are automatically
updated with the new settings. However, if the old or new settings lack
a type setting, an exception will be thrown (since exporters are
always of type 'http' or 'local'). Arguably we shouldn't blindly create
and destroy new exporters on each monitoring setting update, but the
lifecycle of the exporters is a bit out of the scope of what this PR is trying to
address.

This commit introduces a validity-checking methodology similar to
#47711 and #47246, but this time for ALL (including non-http) settings.
Monitoring settings are not useful unless there is an exporter with a type
defined. The type is used as a dependent setting, such that it must
exist in order to set the value. This ensures that when any monitoring setting
changes, it can only get added to the cluster state if the type
exists. If the type exists (and the other validations pass) then the
exporters will get re-built and the cluster state remains valid.

Tests have been included to ensure that all dynamic monitoring settings
have the type as a dependent setting.

[1]
org.elasticsearch.common.settings.SettingsException: missing exporter type for [found-user-defined] exporter
at org.elasticsearch.xpack.monitoring.exporter.Exporters.initExporters(Exporters.java:126) ~[?:?]
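A minimal sketch of the dependent-setting idea described above (plain Java, not the actual Setting infrastructure): any exporter configuration without a valid type is rejected before it can reach the cluster state.

```java
import java.util.Map;
import java.util.Set;

class ExporterSettingsValidator {
    static void validate(String exporterName, Map<String, String> exporterSettings) {
        String type = exporterSettings.get("type");
        if (type == null || Set.of("http", "local").contains(type) == false) {
            throw new IllegalArgumentException(
                "missing or invalid exporter type for [" + exporterName + "] exporter");
        }
    }

    public static void main(String[] args) {
        validate("found-user-defined", Map.of("type", "http", "host", "https://example.org")); // passes
        validate("found-user-defined", Map.of("host", "https://example.org")); // throws
    }
}
```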
2020-06-05 10:47:11 -05:00
Tanguy Leroux 0e57528d5d Remove more //NORELEASE (#57517)
We agreed on removing the following //NORELEASE tags.
2020-06-05 15:34:06 +02:00
Gordon Brown 5a4e5a1e9d
Handle `cluster.max_shards_per_node` in YAML config (#57234)
Prior to this commit, `cluster.max_shards_per_node` was not correctly handled
when it was set via the YAML config file, only when it was set via the Cluster
Settings API.

This commit refactors how the limit is implemented, both to enable correctly
handling the setting in the YAML and to more effectively centralize the logic
used to enforce the limit. The logic used to apply the limit, as well as the
setting value, has been moved to the new `ShardLimitValidator`.
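A self-contained sketch of the enforced limit (illustrative code; the real logic lives in `ShardLimitValidator`), assuming the cluster-wide cap is `cluster.max_shards_per_node` times the number of data nodes:

```java
class ShardLimitCheck {
    static void checkShardLimit(int newShards, int currentOpenShards, int maxShardsPerNode, int dataNodes) {
        int maxAllowed = maxShardsPerNode * dataNodes;
        if (currentOpenShards + newShards > maxAllowed) {
            throw new IllegalArgumentException("this action would add [" + newShards
                + "] total shards, but this cluster currently has [" + currentOpenShards
                + "]/[" + maxAllowed + "] maximum shards open");
        }
    }

    public static void main(String[] args) {
        checkShardLimit(10, 990, 1000, 1); // ok: 1000 <= 1000
        checkShardLimit(20, 990, 1000, 1); // throws: 1010 > 1000
    }
}
```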
2020-06-04 14:02:21 -06:00
Nik Everett 98c379c507
Merge remaining sig_terms into terms (#57397) (#57687)
Merges the remaining implementation of `significant_terms` into `terms`
so that we can more easily make them work properly without
`asMultiBucketAggregator`, which *should* save memory and speed them up.

Relates #56487
2020-06-04 14:32:32 -04:00
Mark Vieira 9b0f5a1589
Include vendored code notices in distribution notice files (#57017) (#57569)
(cherry picked from commit 627ef279fd29f8af63303bcaafd641aef0ffc586)
2020-06-04 10:34:24 -07:00
Armin Braun 80d1b12fa3
Restore ThreadContext after Serializing OutboundMessage (#57659) (#57681)
Stash the current context before restoring the stored context on the IO thread
so that its thread context does not get polluted.

Closes #57554
2020-06-04 17:55:26 +02:00
David Turner fc4dd6d681 Timeout health API on busy master (#57587)
Today `GET _cluster/health?wait_for_events=...&timeout=...` will wait
indefinitely for the master to process the pending cluster health task,
ignoring the specified timeout. This could take a very long time if the master
is overloaded. This commit fixes this by adding a timeout to the pending
cluster health task.
2020-06-04 13:39:22 +01:00
William Brafford 7de6d97363
Version bump for 7.7.1 release (#57619) 2020-06-03 16:38:25 -04:00
Igor Motov 8d7f389f3a
Increase search.max_buckets to 65,535 (#57042)
Increases the default search.max_buckets limit to 65,535, and only counts
buckets during the reduce phase.

Closes #51731
2020-06-03 15:35:41 -04:00
Julie Tibshirani e0a15e8dc4
Remove the 'array value parser' marker interface. (#57571) (#57622)
This PR replaces the marker interface with the method
FieldMapper#parsesArrayValue. I find this cleaner and it will help with the
fields retrieval work (#55363).

The refactor also ensures that only field mappers can declare they parse array
values. Previously other types like ObjectMapper could implement the marker
interface and be passed array values, which doesn't make sense.
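A self-contained sketch of the resulting parse loop (illustrative types, not the actual mapper classes; only `FieldMapper#parsesArrayValue` is taken from the message):

```java
import java.util.List;

// Sketch: only field mappers can opt in to consuming a whole array as a single value.
abstract class FieldMapper {
    boolean parsesArrayValue() {
        return false; // default: arrays are parsed element by element
    }

    abstract void parse(Object value);
}

class GeoPointFieldMapper extends FieldMapper {
    @Override
    boolean parsesArrayValue() {
        return true; // e.g. a [lon, lat] array is one point value
    }

    @Override
    void parse(Object value) {
        System.out.println("parsing array as a single value: " + value);
    }
}

class ArrayParsing {
    static void parseArray(FieldMapper mapper, List<Object> values) {
        if (mapper.parsesArrayValue()) {
            mapper.parse(values);          // mapper consumes the whole array
        } else {
            values.forEach(mapper::parse); // default element-by-element parsing
        }
    }

    public static void main(String[] args) {
        List<Object> point = List.of(-70.0, 40.0);
        parseArray(new GeoPointFieldMapper(), point);
    }
}
```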
2020-06-03 11:30:14 -07:00
Nik Everett 7fd94f7d0f Test: Protect auto_date_histo from 0 buckets
The test for `auto_date_histogram` was trying to round `Long.MAX_VALUE`
if there were 0 buckets. That doesn't work.

Also, this replaces all of the class variables created to make
consistent random results when testing `InternalAutoDateHistogram` with
the newer `randomResultsToReduce`, which is a little simpler to
understand.
2020-06-03 12:51:22 -04:00
Christos Soulios 67abde326e
[7.x] Introduce v6.8.11 (#57600) 2020-06-03 19:10:16 +03:00
Nhat Nguyen 5097071230 Increase timeout for GlobalCheckpointSyncIT (#57567)
The test failed when it was running with 4 replicas and 3 indexing 
threads. The recovering replicas can prevent the global checkpoint from
advancing. This commit increases the timeout to 60 seconds for this
suite and the check for no inflight requests.

Closes #57204
2020-06-03 08:50:02 -04:00
Nik Everett 2a27c411fb
Save memory when geo aggregations are not on top (#57483) (#57551)
Saves memory when the `geotile_grid` and `geohash_grid` are not on the
top level by using the `LongKeyedBucketOrds` we built in #55873.
2020-06-02 16:21:50 -04:00
Zachary Tong 79ac69cfa3
[7.x Backport] Prevent SigTerms/SigText from running on fields they do not support (#57485)
SigTerms cannot run on fields that are not searchable, and SigText
cannot run on fields that do not have analyzers.  Both of these
situations fail today with an esoteric exception, so this just formalizes
the constraint by throwing an IllegalArgumentException up front.

In practice, the only affected field seems to be the `binary` field,
which is not searchable and does not have a default analyzer (e.g. even numeric
and keyword fields have a default analyzer despite not being tokenized).

Adds supported-type tests, and makes some changes to the test itself
to allow testing sigtext (indexing _source).

Also a few tweaks to the test to avoid bad randomization (negative
numbers, etc).
2020-06-02 16:03:37 -04:00