Commit Graph

4718 Commits

Author SHA1 Message Date
Alan Woodward d33d13f2be Simplify generics on Mapper.Builder (#56747)
Mapper.Builder currently has some complex generics on it to allow fluent builder
construction. However, the second type parameter, the return type of the build() method,
is unnecessary, as we can use covariant return types. This commit removes this second
generic parameter.
2020-05-15 12:14:49 +01:00
Ryan Ernst 9fb80d3827
Move publishing configuration to a separate plugin (#56727)
This is another part of the breakup of the massive BuildPlugin. This PR
moves the code for configuring publications to a separate plugin. Most
of the time these publications are jar files, but this also supports the
zip publication we have for integ tests.
2020-05-14 20:23:07 -07:00
Tal Levy 5e90ff32f7
Add Normalize Pipeline Aggregation (#56399) (#56792)
This aggregation will perform normalizations of metrics
for a given series of data in the form of bucket values.

The aggregation supports the following normalizations:

- rescale 0-1
- rescale 0-100
- percentage of sum
- mean normalization
- z-score normalization
- softmax normalization

The normalization to apply is specified in the normalize agg's
`normalizer` field.

For example:

```
{
  "normalize": {
    "buckets_path": <>,
    "normalizer": "percent"
  }
}
```
2020-05-14 17:40:15 -07:00
Lee Hinman a73d7d9e2b
[7.x] Don't allow invalid template combinations (#56397) (#56795)
Backports the following commits to 7.x:

- Don't allow invalid template combinations (#56397)
2020-05-14 16:20:53 -06:00
Mark Tozzi b718193a01
Clean up DocValuesIndexFieldData (#56372) (#56684) 2020-05-14 12:42:37 -04:00
Nhat Nguyen 044ee380e8 Use ConcurrentSet in testTrackingChannelTask (#56775)
We need to use a ConcurrentSet to track the canceled tasks
as cancelTaskAndDescendants can be called concurrently.

Closes #56746
2020-05-14 12:22:59 -04:00
David Turner f0c2c25527 AwaitsFix for #56746 (and #56751) 2020-05-14 12:46:32 +01:00
David Turner 63cc53e512 AwaitsFix for #56757 2020-05-14 12:00:15 +01:00
Martijn van Groningen b87aeb09f7
Allow more apis to resolve data streams (#56743)
Backporting #56683 to 7.x branch.

Allow get settings, cluster state and field caps apis to resolve data streams.
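
For example, after this change requests like the following can target a data stream directly (a sketch; `logs-app` is a hypothetical data stream name, not from the commit):

```
# hypothetical data stream name, for illustration only
GET /logs-app/_settings
GET /logs-app/_field_caps?fields=*
```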
2020-05-14 10:57:13 +02:00
Nhat Nguyen ac432f6612 Reduce test load in TaskManagerTests 2020-05-13 23:52:48 -04:00
Nhat Nguyen 566b23c42c
Cancel task and descendants on channel disconnects (#56620)
If a channel gets disconnected, then we should cancel the tasks
associated with that channel as their results won't be retrieved.

Closes #56327
Relates #56619

Backport of #56620
2020-05-13 22:09:58 -04:00
Jason Tedor 7c8860b7e6
Update number of replicas when removing setting (#56723)
We previously rejected removing the number of replicas setting, which
prevented users from reverting this setting to its default in the natural
way. To fix this, we put the setting back with its default value in the
cases where the user is trying to remove it. Yet, we also need to do the
work of updating the routing table and so on appropriately. This case
was missed because when the setting is being removed, we were defaulting
to -1 in this code path, which is treated as not being updated. Instead,
we must treat the case when we are removing this setting as if the
setting is being updated, too. This commit does that.
2020-05-13 20:13:25 -04:00
David Roberts ab40466bfb
Prevent unexpected native controller output hanging the process (#56685)
In normal operation native controllers are not expected to write
anything to stdout or stderr. However, if, due to an error or
something unexpected with the environment, a native controller
does write something to stdout or stderr, then it will block if
nothing is reading that output.

This change makes the stdout and stderr of native controllers
reuse the same stdout and stderr as the Elasticsearch JVM (which
are by default redirected to es.stdout.log and es.stderr.log) so
that if something unexpected is written to native controller
output then:

1. The native controller process does not block, waiting for
   something to read the output
2. We can see what the output was, making it easier to debug
   obscure environmental problems

Backport of #56491

Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
2020-05-13 22:57:00 +01:00
Nik Everett b98b260048
Merge significant_terms into the terms package (backport of #56699) (#56715)
This merges the code for the `significant_terms` agg into the package
for the code for the `terms` agg. They are *super* entangled already;
this mostly just admits that to ourselves.

Precondition for the terms work in #56487
2020-05-13 17:36:21 -04:00
Luca Cavanna 34410814b9
Don't omit empty arrays when filtering _source (#56527)
When using source filtering exclusions, empty arrays are not preserved in documents: if an array is empty after applying exclusions, no empty array is returned. We have special treatment to make sure that we preserve empty objects, but the behaviour for arrays is different.

It looks like this regression was introduced by #22593, shortly after we refactored source filtering to use automata (#20736).

Note that this change affects what the search API returns when using source exclusions, as well as what gets indexed when using source exclusions for the _source field.

Closes #23796
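
As an illustration (index and field names are hypothetical, not from the commit): given a document like `{"tags": [], "secret": "x"}`, a filtered search such as the sketch below used to drop `"tags"` from the returned source entirely; with this fix the empty array is preserved as `"tags": []`.

```
# hypothetical index and field names
GET /my-index/_search
{
  "_source": {
    "excludes": [ "secret" ]
  }
}
```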
2020-05-13 23:24:21 +02:00
Nik Everett 126619ae3c
Add list of deferred aggregations to the profiler (backport of #56208) (#56682)
This adds a few things to the `breakdown` of the profiler:
* `histogram` aggregations now contain `total_buckets` which is the
  count of buckets that they collected. This could be useful when
  debugging a histogram inside of another bucketing agg that is fairly
  selective.
* All bucketing aggs that can delay their sub-aggregations will now add
  a list of delayed sub-aggregations. This is useful because we
  sometimes have fairly involved logic around which sub-aggregations get
  delayed and this will save you from having to guess.
* Aggregations wrapped in the `MultiBucketAggregatorWrapper` can't
  accurately add anything to the breakdown. Instead, the wrapper
  adds a marker entry `"multi_bucket_aggregator_wrapper": true` so we
  can quickly pick out such aggregations when debugging.

It also fixes a bug where `_count` breakdown entries were contributing
to the overall `time_in_nanos`. They didn't add a large amount of time
so it is unlikely that this caused a big problem, but I fixed it while I was there.

To support the arbitrary breakdown data this reworks the profiler so
that the `breakdown` can contain any data that is supported by
`StreamOutput#writeGenericValue(Object)` and
`XContentBuilder#value(Object)`.
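
To see the new breakdown entries, enable the profiler on a search that runs a bucketing agg. A minimal sketch (index and field names are hypothetical):

```
# hypothetical index and field names
GET /my-index/_search
{
  "profile": true,
  "size": 0,
  "aggs": {
    "by_day": {
      "date_histogram": {
        "field": "timestamp",
        "calendar_interval": "day"
      }
    }
  }
}
```

The profile response's `breakdown` for the `date_histogram` would then include entries like the `total_buckets` described above.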
2020-05-13 16:33:22 -04:00
Julie Tibshirani 1ad83c37c4
Use index sort range query when possible. (#56710)
This PR proposes to use `IndexSortSortedNumericDocValuesRangeQuery` when
possible to speed up certain range queries. Points-based queries are already
very efficient, the only time this query makes a difference is when the range
matches a large number of documents.

Relates to #48665.
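
The optimization applies when the index is sorted on the field being ranged over. A sketch of such a setup (index and field names are hypothetical):

```
# hypothetical index sorted on the field used in range queries
PUT /my-index
{
  "settings": {
    "index": {
      "sort.field": "timestamp",
      "sort.order": "desc"
    }
  },
  "mappings": {
    "properties": {
      "timestamp": { "type": "date" }
    }
  }
}
```

Range queries on `timestamp` that match a large number of documents can then be served by `IndexSortSortedNumericDocValuesRangeQuery`.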
2020-05-13 13:24:45 -07:00
Jason Tedor 5ca2ea2dde
Allow removing replicas setting on closed indices (#56680)
This is similar to a previous change that allowed removing the number of
replicas setting (thus returning it to its default) on open indices. This
commit allows the same for closed indices.

It is unfortunate that we have separate branches for handling open and
closed indices here, but I do not see a clean way to merge these two
together without making a rather unnatural method (note that they invoke
different methods for doing the settings updates). For now, we leave
this as-is even though it led to the miss here.
2020-05-13 15:56:58 -04:00
Mark Vieira e3be18a443
Add version 6.8.10 2020-05-13 11:27:40 -07:00
Bogdan Pintea 2f0663c490 Add the 7.7.1 Version
Add the new version for the bumped 7.7 branch, 7.7.1
2020-05-13 18:46:07 +02:00
Ignacio Vera b4521d5183
upgrade to Lucene 8.6.0 snapshot (#56661) 2020-05-13 14:25:16 +02:00
Jason Tedor 4394235c63
Allow removing index.number_of_replicas setting (#56656)
Today a user can create an index without setting the
index.number_of_replicas setting even though the index metadata requires
that the setting has a value. We do this when creating an index by
explicitly setting index.number_of_replicas to a default value if one
is not provided. However, if a user updates the number of replicas, and
then later wants to return to the default value, they are naturally
inclined to try setting this setting to null, as the agreed-upon way to
return a setting to its default. Since the index metadata requires that
this setting has a non-null value, we blow up when a user attempts to
make this change. This is because we are not taking the same action when
updating a setting on an index that we take when creating an
index. Namely, we are not explicitly setting index.number_of_replicas if
the request does not carry a value for this setting. This would happen
when nulling the setting, which we want to support. This commit
addresses this by setting index.number_of_replicas to the default if the
value for this setting is null when updating the settings for an index.
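
With this change, reverting to the default works the natural way (hypothetical index name):

```
# hypothetical index name; null reverts the setting to its default
PUT /my-index/_settings
{
  "index.number_of_replicas": null
}
```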
2020-05-13 06:25:43 -04:00
Christoph Büscher 73b64908b2
Fix `time_zone` on `query_string` and date fields (#55881) (#56668)
Currently the `time_zone` parameter in `query_string` queries gets applied
correctly only when using the range syntax, e.g. `date:[2020-01-02 TO
2020-01-05]`. When a date field gets searched without explicit range syntax,
e.g. `date:"2020-01-01"`, we internally create a range query that uses the
specified date as the start date and rounds up to the next underspecified unit
for the end date (e.g. here 2020-01-01T23:59:59) without considering the
`time_zone` setting. This change adds a check in QueryStringQueryParser to
detect this scenario early, where we have access to the time zone information,
and directly create a range query using it.

Closes #55813
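
A sketch of a query affected by this fix (index, field, and zone are illustrative):

```
# hypothetical index and field; the implicit range for the date
# now honors time_zone
GET /my-index/_search
{
  "query": {
    "query_string": {
      "query": "date:\"2020-01-01\"",
      "time_zone": "Europe/Paris"
    }
  }
}
```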
2020-05-13 11:20:25 +02:00
Henning Andersen 48a8c7eb88
Ensure search contexts are removed on index delete (#56335) (#56617)
In a race condition, a search context could remain enlisted in
SearchService when an index is deleted, potentially causing the index
folder to not be cleaned up (for either lengthy searches or scrolls with
timeouts > 30 minutes or if the scroll is kept active).
2020-05-13 09:41:02 +02:00
Jake Landis a56fb6192e
[7.x] Fix ingest simulate verbose on failure with conditional (#56478) (#56635)
If a conditional is added to a processor, and that processor fails, and
that processor has an on_failure handler, the full trace of all of the
executed processors may not be displayed in simulate verbose. The
information is correct, but the display misses some of the steps used
to get there.

This happens because a conditional processor is a wrapper around the
real processor, and a processor with an on_failure handler is also a
wrapper around the processor(s). When decorating for simulation we treat
compound processors specially, but if a compound processor is wrapped by
a conditional processor, that compound processor's processors can be
missed for decoration, resulting in the missing displayed steps.

The fix is to treat the conditional processor specially and explicitly
separate it from the processor it is wrapping. This requires us to keep
track of two processors: a possible conditional processor and the actual
processor it may be wrapping.

related: #56004
2020-05-12 15:41:05 -05:00
Armin Braun 0a879b95d1
Save Bounds Checks in BytesReference (#56577) (#56621)
Two spots that allow for some optimization:

* We are often creating a composite reference of just a single item in
  the transport layer => special-cased via a static constructor to make
  sure we never do that.
  * Also removed the pointless case of an empty composite bytes ref.
* `ByteBufferReference` is practically always created from a heap buffer
  these days, so there is no point in dealing with all the bounds checks
  and extra references to sliced buffers; we can just use the underlying
  array directly.
2020-05-12 20:33:45 +02:00
Jason Tedor f7b8f0b2f4
Adjust warning for heap size bootstrap check (#56565)
Today the heap size check warns the user about two reasons why they might
care about the heap size check: resize pauses, and if memory locking is
enabled. Yet, we unconditionally make mention of the memory locking
reason, even if memory locking is not enabled. This can confuse some
users, so we adjust the warning about memory locking to only display if
memory locking is enabled.
2020-05-12 14:31:21 -04:00
Martijn van Groningen 0c61bc63e4
Backport: auto create data streams using index templates v2 (#56596)
Backport: #55377

This commit adds the ability to auto create data streams using index templates v2.
Index templates (v2) now have a data_stream field that includes a timestamp field.
If this is provided and an index name matches that template, then a data stream
(plus its first backing index) is auto created.

Relates to #53100
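
A sketch of such a template, following the description above (names are hypothetical and the exact shape of the `data_stream` object may differ from the final API):

```
# hypothetical template; data_stream marks matching names as data streams
PUT /_index_template/logs-template
{
  "index_patterns": [ "logs-*" ],
  "data_stream": {
    "timestamp_field": "@timestamp"
  }
}
```

Indexing into a name matching `logs-*` would then auto create the data stream plus its first backing index.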
2020-05-12 17:01:15 +02:00
Dan Hermann dfdd7e4fce
Report used memory as zero when total memory cannot be obtained (#56412) 2020-05-12 07:43:51 -05:00
Ignacio Vera 222ee721ec
Add moving percentiles pipeline aggregation (#55441) (#56575)
Similar to what the moving function aggregation does, except merging windows of percentiles
sketches together instead of cumulatively merging final metrics
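
A sketch of how this can be used, mirroring the moving function syntax (index, field, and agg names are hypothetical):

```
# hypothetical index, fields, and agg names
GET /my-index/_search
{
  "size": 0,
  "aggs": {
    "by_day": {
      "date_histogram": {
        "field": "timestamp",
        "calendar_interval": "day"
      },
      "aggs": {
        "load_percentiles": {
          "percentiles": { "field": "load_time", "percents": [ 95, 99 ] }
        },
        "smoothed_percentiles": {
          "moving_percentiles": {
            "buckets_path": "load_percentiles",
            "window": 10
          }
        }
      }
    }
  }
}
```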
2020-05-12 11:35:23 +02:00
Martijn van Groningen 7b1f978931
Move data stream test (#56505) (#56570)
Move data stream resolvability test from IndicesOptionsIntegrationIT to DataStreamIT class.
Whether a transport action supports data streams is no longer controlled via indices options.
2020-05-12 10:44:13 +02:00
Armin Braun 2d08ef729c
Deduplicate Strings in REST Bulk Request Parsing (#56506) (#56568)
We can save a little memory here since these strings might live for quite
a while on the coordinating node.
2020-05-12 09:52:44 +02:00
Ryan Ernst 902fc546bd
Migrate remaining ESIntegTestCases to internalClusterTest (#56479) (#56563)
This commit migrates the ESIntegTestCase tests in x-pack to the
internalClusterTest source set.
2020-05-11 21:06:04 -07:00
Nik Everett 137df274ab
Add support for numeric range keys (#56452) (#56552)
This adds support for parsing numbers as range keys. They get converted
into a string, but we allow numbers.

While I was there I replaced the parser for `Range` with a
`ConstructingObjectParser` which will automatically add support for "did
you mean" style corrections on errors.

Closes #56402
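
For example, `range` keys can now be given as numbers (field and keys are illustrative); they come back as strings in the response:

```
# hypothetical index, field, and keys
GET /my-index/_search
{
  "size": 0,
  "aggs": {
    "price_ranges": {
      "range": {
        "field": "price",
        "ranges": [
          { "key": 100, "to": 100 },
          { "key": 200, "from": 100, "to": 200 }
        ]
      }
    }
  }
}
```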
2020-05-11 19:48:59 -04:00
Nick Knize 9b64149ad2
[Geo] Refactor Point Field Mappers (#56060) (#56540)
This commit refactors the following:
  * GeoPointFieldMapper and PointFieldMapper to
    AbstractPointGeometryFieldMapper derived from AbstractGeometryFieldMapper.
  * .setupFieldType moved up to AbstractGeometryFieldMapper
  * lucene indexing moved up to AbstractGeometryFieldMapper.parse
  * new addStoredFields, addDocValuesFields abstract methods for implementing
    stored field and doc values field indexing in the concrete field mappers

This refactor is the next phase for setting up a framework for extending
spatial field mapper functionality in x-pack.
2020-05-11 17:11:36 -05:00
Benjamin Trent 1d6b2f074e
[Transform] adds geotile_grid support in group_by (#56514) (#56549)
This adds support for grouping by geo points. This uses the agg [geotile_grid](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-geotilegrid-aggregation.html).

I am opting to store the tile results of group_by as a `geo_shape` so that users can query the results. Additionally, the shapes could be visualized and filtered in the kibana maps app.

relates to https://github.com/elastic/elasticsearch/issues/56121
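
A sketch of a transform using this (transform id, index, and field names are hypothetical):

```
# hypothetical transform; groups documents into geotile buckets
PUT _transform/positions-by-tile
{
  "source": { "index": "positions" },
  "dest": { "index": "positions_by_tile" },
  "pivot": {
    "group_by": {
      "tile": {
        "geotile_grid": {
          "field": "location",
          "precision": 8
        }
      }
    },
    "aggregations": {
      "avg_speed": { "avg": { "field": "speed" } }
    }
  }
}
```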
2020-05-11 17:02:40 -04:00
Lee Hinman 1337b35572
Remove prefer_v2_templates query string parameter (#56545)
This commit removes the `prefer_v2_templates` flag and setting. This was a short-lived setting that
allowed specifying whether a V1 or V2 template should be used when an index is created. It has been
removed in favor of V2 templates always having priority.

Relates to #53101
Resolves #56528

This is not a breaking change because this flag was never in a released version.
2020-05-11 14:56:42 -06:00
zhenxianyimeng 8e96e5c936
Use CollectionUtils.isEmpty where appropriate (#55910)
This commit uses the isEmpty utility method for arrays in place of explicit null and length-greater-than-zero checks.
2020-05-11 09:55:57 -07:00
Martijn van Groningen 32471abc0e
Check whether data stream feature flag is enabled before deleting all data streams (#56517) (#56520)
This will fix the release build.
2020-05-11 18:34:50 +02:00
Martijn van Groningen 9ae09570d8
Allow a number of broadcast transport actions to resolve data streams (#55726) (#56502)
Change TransportBroadcastByNodeAction and TransportBroadcastReplicationAction
to be able to resolve data streams by default. Implementations can change this ability.

This change allows the following APIs to resolve data streams: flush,
refresh (which already supported data streams), force merge, clear indices cache,
indices stats (which already supported data streams), segments, upgrade stats,
upgrade, validate query, searchable snapshots stats, clear searchable snapshots cache, and
reload analyzers.

Relates to #53100
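
For instance, refresh and force merge can now target a data stream directly (hypothetical data stream name):

```
# hypothetical data stream name
POST /logs-app/_refresh
POST /logs-app/_forcemerge
```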
2020-05-11 12:48:35 +02:00
Nik Everett 2823300bdf
Speed up rounding in auto_date_histogram (#56384) (#56486)
This wires `auto_date_histogram` into the rounding optimization that I
built in #55559. This should significantly speed up any
`auto_date_histogram`s with `time_zone`s on them.
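
An example of the kind of request that benefits (index, field, and zone are illustrative):

```
# hypothetical index and field
GET /my-index/_search
{
  "size": 0,
  "aggs": {
    "timeline": {
      "auto_date_histogram": {
        "field": "timestamp",
        "buckets": 10,
        "time_zone": "America/New_York"
      }
    }
  }
}
```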
2020-05-09 11:37:30 -04:00
Nik Everett 2f38aeb5e2
Save memory when numeric terms agg is not top (#55873) (#56454)
Right now all implementations of the `terms` agg allocate a new
`Aggregator` per bucket. This uses a bunch of memory. Exactly how much
isn't clear but each `Aggregator` ends up making its own objects to read
doc values which have non-trivial buffers. And it forces all of its
sub-aggregations to do the same. We allocate a new `Aggregator` per
bucket for two reasons:

1. We didn't have an appropriate data structure to track the
   sub-ordinals of each parent bucket.
2. You can only make a single call to `runDeferredCollections(long...)`
   per `Aggregator` which was the only way to delay collection of
   sub-aggregations.

This change switches the method that builds aggregation results from
building them one at a time to building all of the results for the
entire aggregator at the same time.

It also adds a fairly simplistic data structure to track the sub-ordinals
for `long`-keyed buckets.

It uses both of those to power numeric `terms` aggregations and removes
the per-bucket allocation of their `Aggregator`. This fairly
substantially reduces memory consumption of numeric `terms` aggregations
that are not the "top level", especially when those aggregations contain
many sub-aggregations. It also is a pretty big speed up, especially when
the aggregation is under a non-selective aggregation like
the `date_histogram`.

I picked numeric `terms` aggregations because those have the simplest
implementation. At least, I could kind of fit it in my head. And I
haven't fully understood the "bytes"-based terms aggregations, but I
imagine I'll be able to make similar optimizations to them in follow up
changes.
2020-05-08 20:38:53 -04:00
Nik Everett bd4b9dd10e
Speed up time interval rounding around dst (backport #56371) (#56396)
When an index spans a daylight savings time transition we can't use our
optimization that rewrites the requested time zone to a fixed time zone
and instead we used to fall back to a java.util.time based rounding
implementation. In #55559 we optimized "time unit" rounding. This
optimizes "time interval" rounding.

The java.util.time based implementation is about 1650% slower than the
rounding implementation for a fixed time zone. This replaces it with a
similar optimization that is only about 30% slower than the fixed time
zone. The java.util.time implementation allocates a ton of short lived
objects but the optimized implementation doesn't. So it *might* end up
being faster than the microbenchmarks imply.
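
"Time interval" rounding is what backs fixed-interval date histograms, so a request like this sketch (names are illustrative) against an index spanning a DST transition is the kind that gets faster:

```
# hypothetical index and field; fixed_interval plus time_zone
# exercises time interval rounding
GET /my-index/_search
{
  "size": 0,
  "aggs": {
    "by_three_hours": {
      "date_histogram": {
        "field": "timestamp",
        "fixed_interval": "3h",
        "time_zone": "Europe/Berlin"
      }
    }
  }
}
```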
2020-05-08 13:39:27 -04:00
Armin Braun b18d242300
Fix Simulate Template Endpoint Temporary Index Handling (#56406) (#56432)
Use the proper facility for creating a temporary index service for the
simulation, one that does not add itself to the `IndicesService` unnecessarily
(which would otherwise break an assertion about the internal consistency of the
cluster state and the `IndicesService`).

Closes #56298
2020-05-08 18:05:24 +02:00
Martijn van Groningen 83739b5806
Backport: allow cluster health api to resolve data streams (#56425)
Backport of: #56413

Allow cluster health api to resolve data streams and
automatically remove data streams after each test in
test cases extending from `ESIntegTestCase`

Relates to #53100
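
For example (hypothetical data stream name):

```
# hypothetical data stream name
GET /_cluster/health/logs-app
```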
2020-05-08 17:16:25 +02:00
Tanguy Leroux 8e9b69bfd7
Use snapshot information to build searchable snapshot store MetadataSnapshot (#56289) (#56403)
While investigating possible optimizations to speed up searchable
snapshots shard restores, we noticed that Elasticsearch builds the
list of shard files on local disk in order to compare it with the list of
files contained in the snapshot to restore. This list of files is
materialized with a MetadataSnapshot object whose construction
involves reading the footer checksum of every file of the shard
using the Store.checksumFromLuceneFile() method.

Further investigation shows that a MetadataSnapshot object is
also created for other types of operations like building the list of
files to recover in a peer recovery (and primary shard relocation)
or in order to assign a shard to a node. These operations use the
Store.getMetadata(IndexCommit) method to build the list of files
and checksums.

In the case of searchable snapshots, building the MetadataSnapshot
object can potentially trigger cache misses, which in turn can
cause the last range of the file to be downloaded and written to
the cache in order to check the 16-byte footer. This in turn can
cause more evictions.

Since searchable snapshots already contain the footer information
of every file in BlobStoreIndexShardSnapshot, we can read the
checksum directly from it and avoid using the cache at all to create a
MetadataSnapshot for the operations mentioned above.

This commit adds a shortcut to the
SearchableSnapshotDirectory.openInput() method - similarly to what
already exists for segment infos - so that it creates a specific
IndexInput for checksum reading operations.
2020-05-08 14:16:19 +02:00
Armin Braun 085ff8c404
Add More Trace Logging to BlobStoreRepository (#56336) (#56401)
Adding more trace logging that would be helpful in understanding
the precise order of blob-level operations if needed.
2020-05-08 08:31:32 +02:00
Tal Levy 13944b1bf9 Fix max-int limit for number of points reduced in geo_centroid (#56370)
A bug in InternalGeoCentroid#reduce existed that summed up
the aggregation's long-valued counts into a local integer variable.
Since it is definitely possible to reduce more than Integer.MAX points,
this change simply updates that variable to be a long-valued number.

Closes #55992.
2020-05-07 14:30:29 -07:00
Tal Levy 6e0178fb68
Move CumulativeSumPipelineAgg to use ConstructingObjectParser parsing (#55990) (#56380)
As part of #52776, this refactors the aggregation to use
the context parser to parse its parameters.
2020-05-07 12:34:54 -07:00
Tim Brooks b84d1e2577
Improve logging around SniffConnectionStrategy (#56378)
Currently, the logging around the SniffConnectionStrategy is limited.
The log messages are inconsistent and sometimes wrong. This commit
cleans up these log messages to describe when connections are happening
and what failed if a step fails.

Additionally, this commit enables TRACE logging for a problematic test
(testEnsureWeReconnect).
2020-05-07 13:11:56 -06:00