Commit Graph

4681 Commits

Author SHA1 Message Date
zhenxianyimeng 8e96e5c936
Use CollectionUtils.isEmpty where appropriate (#55910)
This commit uses the isEmpty utility method for arrays in place of null and length-greater-than-zero checks.
2020-05-11 09:55:57 -07:00
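
A minimal before/after sketch of the substitution described in the commit above; the isEmpty helper here is a stand-in for Elasticsearch's CollectionUtils, and the describe method is hypothetical.

```java
import java.util.Arrays;

public class IsEmptyExample {
    // Stand-in for CollectionUtils.isEmpty(Object[]); not the Elasticsearch class itself.
    static boolean isEmpty(Object[] array) {
        return array == null || array.length == 0;
    }

    static String describe(String[] warnings) {
        // Before: if (warnings != null && warnings.length > 0) { ... }
        // After: a single utility call makes the intent clearer.
        if (isEmpty(warnings)) {
            return "no warnings";
        }
        return "warnings: " + Arrays.toString(warnings);
    }

    public static void main(String[] args) {
        System.out.println(describe(null));                         // no warnings
        System.out.println(describe(new String[] { "deprecated" })); // warnings: [deprecated]
    }
}
```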
Martijn van Groningen 32471abc0e
Check whether data stream feature flag is enabled before deleting all data streams, (#56517) (#56520)
This will fix the release build.
2020-05-11 18:34:50 +02:00
Martijn van Groningen 9ae09570d8
Allow a number of broadcast transport actions to resolve data streams (#55726) (#56502)
Change TransportBroadcastByNodeAction and TransportBroadcastReplicationAction
to be able to resolve data streams by default. Implementations can change this ability.

This change allows the following APIs to resolve data streams: flush,
refresh (already supported data streams), force merge, clear indices cache,
indices stats (already supported data streams), segments, upgrade stats, 
upgrade, validate query, searchable snapshots stats, clear searchable snapshots cache and
reload analyzers APIs.

Relates to #53100
2020-05-11 12:48:35 +02:00
Nik Everett 2823300bdf
Speed up rounding in auto_date_histogram (#56384) (#56486)
This wires `auto_date_histogram` into the rounding optimization that I
built in #55559. This should significantly speed up any
`auto_date_histogram`s with `time_zone`s on them.
2020-05-09 11:37:30 -04:00
Nik Everett 2f38aeb5e2
Save memory when numeric terms agg is not top (#55873) (#56454)
Right now all implementations of the `terms` agg allocate a new
`Aggregator` per bucket. This uses a bunch of memory. Exactly how much
isn't clear but each `Aggregator` ends up making its own objects to read
doc values which have non-trivial buffers. And it forces all of its
sub-aggregations to do the same. We allocate a new `Aggregator` per
bucket for two reasons:

1. We didn't have an appropriate data structure to track the
   sub-ordinals of each parent bucket.
2. You can only make a single call to `runDeferredCollections(long...)`
   per `Aggregator` which was the only way to delay collection of
   sub-aggregations.

This change switches the method that builds aggregation results from
building them one at a time to building all of the results for the
entire aggregator at the same time.

It also adds a fairly simplistic data structure to track the sub-ordinals
for `long`-keyed buckets.

It uses both of those to power numeric `terms` aggregations and removes
the per-bucket allocation of their `Aggregator`. This fairly
substantially reduces memory consumption of numeric `terms` aggregations
that are not the "top level", especially when those aggregations contain
many sub-aggregations. It also is a pretty big speed up, especially when
the aggregation is under a non-selective aggregation like
the `date_histogram`.

I picked numeric `terms` aggregations because those have the simplest
implementation. At least, I could kind of fit it in my head. And I
haven't fully understood the "bytes"-based terms aggregations, but I
imagine I'll be able to make similar optimizations to them in follow up
changes.
2020-05-08 20:38:53 -04:00
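
A toy sketch of the sub-ordinal tracking idea described in the commit above: give each (parent bucket, long key) pair a dense ordinal so a single aggregator can collect every bucket and build all results in one pass. This is an illustration only, not the Elasticsearch implementation, and the class name is made up.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ToyLongKeyedBucketOrds {
    private final Map<Long, Map<Long, Long>> ords = new HashMap<>();
    private final List<long[]> ordToBucket = new ArrayList<>(); // sub-ordinal -> {parentOrd, key}

    /** Returns the sub-ordinal for (parentOrd, key), creating a new one if needed. */
    public long add(long parentOrd, long key) {
        Map<Long, Long> perParent = ords.computeIfAbsent(parentOrd, p -> new HashMap<>());
        return perParent.computeIfAbsent(key, k -> {
            long ord = ordToBucket.size();
            ordToBucket.add(new long[] { parentOrd, key });
            return ord;
        });
    }

    public long size() {
        return ordToBucket.size();
    }

    public static void main(String[] args) {
        ToyLongKeyedBucketOrds ords = new ToyLongKeyedBucketOrds();
        System.out.println(ords.add(0, 42)); // 0
        System.out.println(ords.add(1, 42)); // 1 -- same key, different parent bucket
        System.out.println(ords.add(0, 42)); // 0 -- already seen
        System.out.println(ords.size());     // 2
    }
}
```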
Nik Everett bd4b9dd10e
Speed up time interval rounding around dst (backport #56371) (#56396)
When an index spans a daylight savings time transition we can't use our
optimization that rewrites the requested time zone to a fixed time zone
and instead we used to fall back to a java.time based rounding
implementation. In #55559 we optimized "time unit" rounding. This
optimizes "time interval" rounding.

The java.time based implementation is about 1650% slower than the
rounding implementation for a fixed time zone. This replaces it with a
similar optimization that is only about 30% slower than the fixed time
zone. The java.time implementation allocates a ton of short lived
objects but the optimized implementation doesn't. So it *might* end up
being faster than the microbenchmarks imply.
2020-05-08 13:39:27 -04:00
Armin Braun b18d242300
Fix Simulate Template Endpoint Temporary Index Handling (#56406) (#56432)
Use the proper facility for creating a temporary index service for the simulation,
one that does not add itself to the `IndicesService` unnecessarily (breaking an assertion about the
internal consistency of the cluster state and the `IndicesService`).

Closes #56298
2020-05-08 18:05:24 +02:00
Martijn van Groningen 83739b5806
Backport: allow cluster health api to resolve data streams (#56425)
Backport of: #56413

Allow cluster health api to resolve data streams and
automatically remove data streams after each test in
test cases extending from `ESIntegTestCase`

Relates to #53100
2020-05-08 17:16:25 +02:00
Tanguy Leroux 8e9b69bfd7
Use snapshot information to build searchable snapshot store MetadataSnapshot (#56289) (#56403)
While investigating possible optimizations to speed up searchable
snapshots shard restores, we noticed that Elasticsearch builds the
list of shard files on local disk in order to compare it with the list of
files contained in the snapshot to restore. This list of files is
materialized with a MetadataSnapshot object whose construction
involves reading the footer checksum of every file of the shard
using the Store.checksumFromLuceneFile() method.

Further investigation shows that a MetadataSnapshot object is
also created for other types of operations like building the list of
files to recover in a peer recovery (and primary shard relocation)
or in order to assign a shard to a node. These operations use the
Store.getMetadata(IndexCommit) method to build the list of files
and checksums.

In the case of searchable snapshots building the MetadataSnapshot
object can potentially trigger cache misses, which in turn can
cause the download and the writing in cache of the last range of
the file in order to check the 16 bytes footer. This in turn can
cause more evictions.

Since a searchable snapshot already contains the footer information
of every file in BlobStoreIndexShardSnapshot, it can directly read the
checksum from it and avoid using the cache at all to create a
MetadataSnapshot for the operations mentioned above.

This commit adds a shortcut to the
SearchableSnapshotDirectory.openInput() method - similarly to what
already exists for segment infos - so that it creates a specific
IndexInput for checksum reading operations.
2020-05-08 14:16:19 +02:00
Armin Braun 085ff8c404
Add More Trace Logging to BlobStoreRepository (#56336) (#56401)
Adding more trace logging that would be helpful in understanding
the precise order of blob-level operations if needed.
2020-05-08 08:31:32 +02:00
Tal Levy 13944b1bf9 Fix max-int limit for number of points reduced in geo_centroid (#56370)
A bug in InternalGeoCentroid#reduce existed that summed up
the aggregation's long-valued counts into a local integer variable.
Since it is definitely possible to reduce more than Integer.MAX_VALUE points,
this change simply updates that variable to be a long-valued number.

Closes #55992.
2020-05-07 14:30:29 -07:00
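
A small illustration of the overflow being fixed, using a hypothetical reduce loop rather than the InternalGeoCentroid code.

```java
public class CentroidCountOverflow {
    public static void main(String[] args) {
        long perShardCount = 1_500_000_000L; // points reduced from a single shard result

        // Before the fix: long-valued counts were summed into an int, which overflows.
        int intTotal = 0;
        for (int shard = 0; shard < 2; shard++) {
            intTotal += perShardCount; // compound assignment silently narrows to int
        }

        // After the fix: accumulate into a long.
        long longTotal = 0;
        for (int shard = 0; shard < 2; shard++) {
            longTotal += perShardCount;
        }

        System.out.println(intTotal);  // -1294967296 (overflowed)
        System.out.println(longTotal); // 3000000000
    }
}
```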
Tal Levy 6e0178fb68
Move CumulativeSumPipelineAgg to use ConstructingObjectParser parsing (#55990) (#56380)
As part of #52776, this refactors the aggregation to use
the context parser to parse its parameters.
2020-05-07 12:34:54 -07:00
Tim Brooks b84d1e2577
Improve logging around SniffConnectionStrategy (#56378)
Currently, the logging around the SniffConnectionStrategy is limited.
The log messages are inconsistent and sometimes wrong. This commit
cleans up these log messages to describe when connections are happening
and what failed if a step fails.

Additionally, this commit enables TRACE logging for a problematic test
(testEnsureWeReconnect).
2020-05-07 13:11:56 -06:00
Tim Brooks 9d076364d7
Fix testCollectNodes test assertion (#56294)
Currently, when a connection closes, a new sniff round begins. The
testCollectNodes test closes four transports before triggering the
method to collect the remote nodes. This leads to a race where there are
a number of reasons the collect nodes call might fail. This commit fixes
that issue by changing the test assertion to include a potential failure
condition.

Fixes #55292.
2020-05-07 11:52:43 -06:00
Nik Everett b5e385fa56
Fix auto_date_histogram interval (#56252) (#56341)
`auto_date_histogram` was returning the incorrect `interval` because
of a combination of two things:
1. When pipeline aggregations rewrote `auto_date_histogram` we reset the
   interval to 1. Oops. Fixed that.
2. *Every* bucket aggregation was rewriting its buckets as though there
   was a pipeline aggregation even if there weren't any. This is a bit
   silly, so we skip that too.

Closes #56116
2020-05-07 10:27:40 -04:00
Nhat Nguyen bd0e0f41a0
Ensure unregister child node if failed to register task (#56254)
We fail to unregister the child node in registerAndExecute if the parent
task is being canceled. This leads to a bug where a cancel request never
completes.

Closes #55875
Relates #54312
2020-05-07 10:10:13 -04:00
Nik Everett e35919d3b8
Optimize date_histograms across daylight savings time (backport of #55559) (#56334)
Rounding dates on a shard that contains a daylight savings time transition
is currently something like 1400% slower than when a shard contains dates
only on one side of the DST transition. And it makes a ton of short lived
garbage. This replaces that implementation with one that benchmarks to
having around 30% overhead instead of the 1400%. And it doesn't generate
any garbage per search hit.

Some background:
There are two ways to round in ES:
* Round to the nearest time unit (Day/Hour/Week/Month/etc)
* Round to the nearest time *interval* (3 days/2 weeks/etc)

I'm only optimizing the first one in this change and plan to do the second
in a follow up. It turns out that rounding to the nearest unit really *is*
two problems: when the unit rounds to midnight (day/week/month/year) and
when it doesn't (hour/minute/second). Rounding to midnight is consistently
about 25% faster than rounding to individual hours or minutes.

This optimization relies on being able to *usually* figure out what the
minimum and maximum dates are on the shard. This is similar to an existing
optimization where we rewrite time zones that aren't fixed
(think America/New_York and its daylight savings time transitions) into
fixed time zones so long as there isn't a daylight savings time transition
on the shard (UTC-5 or UTC-4 for America/New_York). Once I implement
time interval rounding the time zone rewriting optimization *should* no
longer be needed.

This optimization doesn't come into play for `composite` or
`auto_date_histogram` aggs because neither have been migrated to the new
`DATE` `ValuesSourceType` which is where that range lookup happens. When
they are they will be able to pick up the optimization without much work.
I expect this to be substantial for `auto_date_histogram` but less so for
`composite` because it deals with fewer values.

Note: My 30% overhead figure comes from small numbers of daylight savings
time transitions. That overhead gets higher when there are more
transitions in logarithmic fashion. When there are two thousand years
worth of transitions my algorithm ends up being 250% slower than rounding
without a time zone, but java time is 47000% slower at that point,
allocating memory as fast as it possibly can.
2020-05-07 09:10:51 -04:00
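
A rough sketch of the precompute-then-lookup idea described in the commit above, written against plain java.time rather than the Elasticsearch Rounding classes; the class and method names here are made up for illustration. The point is that the local-midnight boundaries between the shard's min and max timestamps are computed once, so each document only needs a cheap array lookup.

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.util.ArrayList;
import java.util.List;

public class PrecomputedDayRounding {
    private final long[] midnights; // UTC millis of each local midnight in range, ascending

    PrecomputedDayRounding(ZoneId zone, long minUtcMillis, long maxUtcMillis) {
        List<Long> boundaries = new ArrayList<>();
        ZonedDateTime midnight = Instant.ofEpochMilli(minUtcMillis).atZone(zone)
                .toLocalDate().atStartOfDay(zone);
        while (midnight.toInstant().toEpochMilli() <= maxUtcMillis) {
            boundaries.add(midnight.toInstant().toEpochMilli());
            midnight = midnight.toLocalDate().plusDays(1).atStartOfDay(zone);
        }
        midnights = boundaries.stream().mapToLong(Long::longValue).toArray();
    }

    /** Rounds utcMillis down to the start of its local day using the precomputed table. */
    long roundToDay(long utcMillis) {
        int i = midnights.length - 1;
        while (i > 0 && midnights[i] > utcMillis) {
            i--; // a real implementation would binary search instead of scanning
        }
        return midnights[i];
    }

    public static void main(String[] args) {
        ZoneId ny = ZoneId.of("America/New_York");
        long min = Instant.parse("2020-03-07T00:00:00Z").toEpochMilli();
        long max = Instant.parse("2020-03-10T00:00:00Z").toEpochMilli();
        PrecomputedDayRounding rounding = new PrecomputedDayRounding(ny, min, max);
        long t = Instant.parse("2020-03-08T12:00:00Z").toEpochMilli(); // after the spring-forward jump
        // Prints the local midnight of 2020-03-08 in New York (05:00 UTC).
        System.out.println(Instant.ofEpochMilli(rounding.roundToDay(t)));
    }
}
```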
Armin Braun 3bad5b3c01
Fix Noisy Logging during Snapshot Delete (#56264) (#56329)
We were logging the cleanup of the snap- and meta- blobs for every snapshot delete,
which is needlessly noisy and confusing to users. We should only log actual stale/unexpected
blobs here.
2020-05-07 13:48:53 +02:00
Ryan Ernst 33d6a55d1d
Create plugin for internalClusterTest task (#56067)
This commit creates a new gradle plugin to provide a separate task name
and source set for running ESIntegTestCase tests. The only project
converted to use the new plugin in this PR is server, as an example. The
remaining cases in x-pack will be handled in followups.

backport of #55896
2020-05-06 17:20:52 -07:00
Mark Vieira f28a12cbba
Add version 7.9.0
This reverts commit 350e930e
2020-05-06 13:34:57 -07:00
Julie Tibshirani e852bb29b7
Simplify signature of FieldMapper#parseCreateField. (#56144)
`FieldMapper#parseCreateField` accepts the parse context, plus a list of fields
as an output parameter. These fields are immediately added to the document
through `ParseContext#doc()`.

This commit simplifies the signature by removing the list of fields, and having
the mappers add the fields directly to `ParseContext#doc()`. I think this is
nicer for implementors, because previously fields could be added either through
the list, or the context (through `add`, `addWithKey`, etc.)
2020-05-06 11:12:09 -07:00
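
A toy before/after sketch of the signature change described above; the Field and ParseContext types below only mimic the real Elasticsearch classes.

```java
import java.util.ArrayList;
import java.util.List;

public class ParseCreateFieldExample {
    static class Field {
        final String name;
        final String value;
        Field(String name, String value) { this.name = name; this.value = value; }
    }

    static class ParseContext {
        private final List<Field> doc = new ArrayList<>();
        List<Field> doc() { return doc; }
    }

    // Before: fields were handed back through an output-parameter list that was
    // later added to the document.
    static void parseCreateFieldOld(ParseContext context, List<Field> fields) {
        fields.add(new Field("title", "elasticsearch"));
    }

    // After: mappers add fields directly to the document, leaving one obvious path.
    static void parseCreateFieldNew(ParseContext context) {
        context.doc().add(new Field("title", "elasticsearch"));
    }

    public static void main(String[] args) {
        ParseContext context = new ParseContext();
        parseCreateFieldNew(context);
        System.out.println(context.doc().size()); // 1
    }
}
```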
Rory Hunter 350e930e55 Revert "Add version 7.9.0"
This reverts commit b8b4ebd089.
2020-05-06 14:24:55 +01:00
Rory Hunter b8b4ebd089 Add version 7.9.0 2020-05-06 09:12:14 +01:00
Tanguy Leroux 131a3911eb Replace BlobContainerWrapper by FilterBlobContainer (#56200)
A FilterBlobContainer class was introduced in #55952 and it delegates
its behavior to a given BlobContainer while allowing subclasses to override
only the necessary methods.

This commit replaces the existing BlobContainerWrapper class from 
the test framework with the new FilterBlobContainer from core.
2020-05-06 10:05:43 +02:00
Dan Hermann 6674f14fb3
[7.x] Get index includes parent data stream for backing indices (#56238) 2020-05-05 15:43:42 -05:00
Mark Tozzi 33da086d7b
[7.x] Wire up DiversifiedAggregation (#56145) (#56222) 2020-05-05 13:11:49 -04:00
Tal Levy e4f2c3105d
Add geo_shape support for geotile_grid and geohash_grid (#55966) (#56228)
This commit adds aggregation support for the geo_shape field
type on geo*_grid aggregations.

It introduces a Tiler for both tiles and hashes that enables a new type of
ValuesSource to replace the GeoPoint's CellIdSource. This makes it possible
for the existing Aggregator to be re-used, so no new implementations of
the grid aggregators are added.
2020-05-05 09:54:14 -07:00
Lee Hinman b77c0bbe26
[7.x] Validate V2 templates more strictly (#56170) (#56226)
Backports the following commits to 7.x:
 - Validate V2 templates more strictly (#56170)
2020-05-05 10:34:56 -06:00
Igor Motov 94b349cd18
[7.x] Simplify the ValuesSourceRegistry structure (#56154) (#56197)
Follow up to #55747.
2020-05-05 10:37:02 -04:00
Tanguy Leroux b9636713b1
Searchable Snapshots should respect max_restore_bytes_per_sec (#55952) (#56199)
This commit changes searchable snapshots so that they now respect the
repository's max_restore_bytes_per_sec setting when downloading blobs.

Backport of #55952 for 7.x
2020-05-05 15:43:06 +02:00
Jason Tedor c38388c506
Fix compiling in TransportValidateQueryActionTests
This arose after a backport where we do not have the niceties of the
Java 11 diamond operator. This commit fixes it by adding the proper type
parameter.
2020-05-05 07:36:40 -04:00
Jason Tedor 410eb29937
Fix validate query listener invocation bug (#56157)
When the index we are validating a query against does not exist, we try to send
back a response letting the client know that the index does not
exist. Yet, we accidentally fall through into the case where the
validation failed for some other reason. This means that we end up
notifying the channel twice. Sometimes the notification occurs after the
failure has been written out and the channel closed (so the second
invocation leads to a silent failed to write to a closed channel issue),
and sometimes the response does end up in the channel, creating garbled
responses to the client. This commit fixes that issue by avoiding the
fallthrough.
2020-05-05 07:26:02 -04:00
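
A simplified sketch of the double-notification pattern described above and its fix, using a toy listener rather than the Elasticsearch transport classes.

```java
public class DoubleNotifyExample {
    interface Listener {
        void onResponse(String response);
        void onFailure(Exception e);
    }

    // Buggy shape: the missing-index branch notifies the listener and then
    // falls through, so the listener is notified a second time.
    static void handleBuggy(boolean indexExists, Listener listener) {
        if (indexExists == false) {
            listener.onFailure(new IllegalArgumentException("no such index"));
            // missing return -> falls through
        }
        listener.onResponse("validation result");
    }

    // Fixed shape: every path notifies the listener exactly once.
    static void handleFixed(boolean indexExists, Listener listener) {
        if (indexExists == false) {
            listener.onFailure(new IllegalArgumentException("no such index"));
            return;
        }
        listener.onResponse("validation result");
    }

    public static void main(String[] args) {
        Listener printing = new Listener() {
            public void onResponse(String response) { System.out.println("response: " + response); }
            public void onFailure(Exception e) { System.out.println("failure: " + e.getMessage()); }
        };
        handleBuggy(false, printing); // prints a failure AND a response
        handleFixed(false, printing); // prints only the failure
    }
}
```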
Nhat Nguyen 60d097e262
Avoid copying file chunks in peer recovery (#56072) (#56172)
A follow-up of #55353 to avoid copying file chunks before sending
them to the network layer.

Relates #55353
2020-05-04 23:39:34 -04:00
Ryan Ernst 39ba06cbb2
Add dummy file for new client example snippets location (#56152)
This file is added simply to ensure the new directory exists, so it can
be added to the docs configuration.
2020-05-04 15:48:56 -07:00
Lee Hinman 8fa14b333d
[7.x] Validate non-negative priorities for V2 index templates (#56139) (#56163)
Backports the following commits to 7.x:
 - Validate non-negative priorities for V2 index templates (#56139)
2020-05-04 16:19:13 -06:00
Martijn van Groningen 2ac32db607
Move includeDataStream flag from IndicesOptions to IndexNameExpressionResolver.Context (#56151)
Backport of #56034.

Move includeDataStream flag from an IndicesOptions to IndexNameExpressionResolver.Context
as a dedicated field that callers to IndexNameExpressionResolver can set.

Also alter the indices stats api to support data streams.
The rollover api uses this api, and rolling over a data stream would otherwise no longer work.

Relates to #53100
2020-05-04 22:38:33 +02:00
Lee Hinman 3cefe192a2
[7.x] Remove Index Templates V2 feature flag (#56123) (#56141)
Backports the following commits to 7.x:
 - Remove Index Templates V2 feature flag (#56123)
2020-05-04 13:15:51 -06:00
Martijn van Groningen 6d03081560
Add auto create action (#56122)
Backport of #55858 to 7.x branch.

Currently the TransportBulkAction detects whether an index is missing and
then decides whether it should be auto created. The coordination of the
index creation also happens in the TransportBulkAction on the coordinating node.

This change adds a new transport action that the TransportBulkAction delegates to
if missing indices need to be created. The reasons for this change:

* Auto creation of data streams can't occur on the coordinating node.
Based on the index template (v2) either a regular index or a data stream should be created.
However if the coordinating node is slow in processing cluster state updates then it may be
unaware of the existence of certain index templates, which can then lead to the
TransportBulkAction creating an index instead of a data stream. Therefore the coordination of
creating an index or data stream should occur on the master node. See #55377

* From a security perspective it is useful to know whether index creation originates from the
create index api or from auto creating a new index via the bulk or index api. For example
a user would be allowed to auto create an index, but not to use the create index api. The
auto create action will allow security to distinguish these two different patterns of
index creation.
This change adds the following new transport actions:

AutoCreateAction, the TransportBulkAction redirects to this action and this action will actually create the index (instead of the TransportCreateIndexAction). Later, via #55377, the AutoCreateAction can be improved to also determine whether an index or data stream should be created.

The create_index index privilege is also modified, so that if this permission is granted then a user is also allowed to auto create indices. This change does not yet add an auto_create index privilege. A future change can introduce this new index privilege or modify an existing index / write index privilege.

Relates to #53100
2020-05-04 19:10:09 +02:00
Julie Tibshirani 6b5cf1b031 For constant_keyword, make sure exists query handles missing values. (#55757)
It's possible for a constant_keyword to have a 'null' value before any documents
are seen that contain a value for the field. In this case, no documents have a
value for the field, and 'exists' queries should return no documents.
2020-05-04 09:41:52 -07:00
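
A sketch of the behavior described above, assuming lucene-core on the classpath; it is not the actual ConstantKeywordFieldMapper code, and the choice of MatchAllDocsQuery for the value-set case is an illustration of the field's "every document has the same value" semantics.

```java
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.MatchNoDocsQuery;
import org.apache.lucene.search.Query;

public class ConstantKeywordExistsSketch {
    static Query existsQuery(String constantValue) {
        if (constantValue == null) {
            // No document has seen a value yet, so exists must match nothing.
            return new MatchNoDocsQuery("no value has been set for the constant_keyword field");
        }
        // Once a value is set, every document in the index carries it.
        return new MatchAllDocsQuery();
    }

    public static void main(String[] args) {
        System.out.println(existsQuery(null));       // matches no documents
        System.out.println(existsQuery("logs-app")); // matches all documents
    }
}
```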
Armin Braun e8ef44ce78
Allow Bulk Snapshot Deletes to Abort (#56009) (#56111)
Making use of #55773 to simplify the snapshot state machine.
1. Deletes with no in-progress snapshot now add the delete entry to the cluster state right away
instead of doing a second CS update after the first update was a NOOP.
2. If a bulk delete matches in-progress as well as completed snapshots, abort the in-progress snapshot
and then move on to delete from the repository.
2020-05-04 16:21:00 +02:00
Christos Soulios c65f828cb7
[7.x] Histogram field type support for ValueCount and Avg aggregations (#56099)
Backports #55933 to 7.x

Implements value_count and avg aggregations over Histogram fields as discussed in #53285

- value_count returns the sum of the counts arrays of the histograms
- avg computes a weighted average of the values array of the histogram by multiplying each value with its associated element in the counts array
2020-05-04 13:23:02 +03:00
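
A worked example of both computations over a pre-aggregated histogram with parallel values/counts arrays, in plain Java rather than the aggregator code.

```java
public class HistogramFieldAggExample {
    public static void main(String[] args) {
        double[] values = { 0.1, 0.5, 1.0 };
        long[] counts = { 10, 30, 60 };

        long valueCount = 0;    // value_count: sum of the counts array
        double weightedSum = 0; // numerator of the weighted average
        for (int i = 0; i < values.length; i++) {
            valueCount += counts[i];
            weightedSum += values[i] * counts[i];
        }
        double avg = weightedSum / valueCount;

        System.out.println(valueCount); // 100
        System.out.println(avg);        // (0.1*10 + 0.5*30 + 1.0*60) / 100 = 0.76
    }
}
```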
Armin Braun e01b999ef0
Add Functionality to Consistently Read RepositoryData For CS Updates (#55773) (#56091)
Using optimistic locking, add the ability to run a repository state
update task with a consistent view of the current repository data.
Allows for a follow-up to remove the snapshot INIT state.
2020-05-04 08:13:14 +02:00
Armin Braun 3a64ecb6bf
Allow Deleting Multiple Snapshots at Once (#55474) (#56083)
* Allow Deleting Multiple Snapshots at Once (#55474)

Adds deleting multiple snapshots in one go without significantly changing the mechanics of snapshot deletes otherwise.
This change does not yet allow mixing snapshot delete and abort. Abort is still only allowed for a single snapshot delete by exact name.
2020-05-03 20:30:58 +02:00
David Turner 69f50fe79f Improve same-shard allocation explanations (#56010)
I see occasional confusion about the explanations emitted by the same-shard
allocation decider, particularly amongst new users setting up a single-node
cluster and trying to determine why their cluster has `yellow` health. For
example:

    the shard cannot be allocated to the same node on which a copy of the shard
    already exists

This is technically correct but it's quite a complicated sentence. Also, by
starting with "the shard cannot be allocated" it makes it sound like this is
the problem, whereas in fact this message is a good thing and users should
typically focus their attention elsewhere.

This commit simplifies the wording of these messages and makes them sound more
positive, for example:

    a copy of this shard is already allocated to this node
2020-05-01 10:07:14 +01:00
Mark Tozzi d8eb51ed63
Wire up GeoDistanceAggregation (#55975) (#56042) 2020-04-30 15:43:27 -04:00
Tim Brooks 54dbea6c65
Improve RemoteConnectionManager consistency (#55759)
In order to iterate through remote connections, the remote connection
manager maintains a local cache of connected nodes. Unfortunately this
makes testing difficult, as the cache is inherently racy in
comparison to the parent connection manager's map of connections.

This commit improves the relationship by only returning a cached
connection if it is still registered with the parent. If the connection
is not open, we will go to the slow path of allocating an iterator
directly from the parent.
2020-04-30 12:13:06 -06:00
Igor Motov d8f9df771d
Expose agg usage in Feature Usage API (#55732) (#56048)
Counts usage of the aggs and exposes the counts via _nodes/usage/.

Closes #53746
2020-04-30 12:53:36 -04:00
Lee Hinman 3dada1e2d3
[7.x] Handle merging dotted object names when merging V2 template mappings (#55982) (#56041)
Backports the following commits to 7.x:
 - Handle merging dotted object names when merging V2 template mappings (#55982)
2020-04-30 10:51:43 -06:00
Przemko Robakowski 797f63e743
[7.x] Emit deprecation warning if multiple v1 templates match with a new index (#55558) (#56038)
* Emit deprecation warning if multiple v1 templates match with a new index (#55558)

* Emit deprecation warning if multiple v1 templates match with a new index

* DEPRECATION_LOGGER rename
2020-04-30 17:36:17 +02:00
Luca Cavanna fc6422ffcc Consolidate DelayableWriteable (#55932)
This commit includes a number of minor improvements around `DelayableWriteable`: javadocs were expanded and reworded, `get` was renamed to `expand` and `DelayableWriteable` no longer implements `Supplier`. Also a couple of methods are now private instead of package private.
2020-04-30 17:16:58 +02:00