Commit Graph

5537 Commits

Marios Trivyzas 5c0f26de1d
SQL: [Docs] Fix example for DATETIME_PARSE (#56409)
When no timezone is specified, the session timezone is used without
conversion; fix the docs test accordingly.

Follows: #56158
(cherry picked from commit 4b79b19ea5c3d17e05cb8130f3c754ac9bfd2382)
2020-05-12 09:23:00 +02:00
Ryan Ernst 902fc546bd
Migrate remaining ESIntegTestCases to internalClusterTest (#56479) (#56563)
This commit migrates the ESIntegTestCase tests in x-pack to the
internalClusterTest source set.
2020-05-11 21:06:04 -07:00
Nick Knize 9b64149ad2
[Geo] Refactor Point Field Mappers (#56060) (#56540)
This commit refactors the following:
  * GeoPointFieldMapper and PointFieldMapper to
    AbstractPointGeometryFieldMapper derived from AbstractGeometryFieldMapper.
  * .setupFieldType moved up to AbstractGeometryFieldMapper
  * lucene indexing moved up to AbstractGeometryFieldMapper.parse
  * new addStoredFields, addDocValuesFields abstract methods for implementing
    stored field and doc values field indexing in the concrete field mappers

This refactor is the next phase for setting up a framework for extending
spatial field mapper functionality in x-pack.
2020-05-11 17:11:36 -05:00
Tim Brooks 760ab726c2
Share netty event loops between transports (#56553)
Currently Elasticsearch creates independent event loop groups for each
transport type (HTTP and internal). This is unnecessary and
can lead to contention when different threads access shared resources
(e.g. allocators). This commit moves to a model where, by default, the
event loops are shared between the transports. The previous behavior can
be attained by specifically setting the http worker count.
2020-05-11 15:43:43 -06:00
Benjamin Trent 1d6b2f074e
[Transform] adds geotile_grid support in group_by (#56514) (#56549)
This adds support for grouping by geo points. This uses the agg [geotile_grid](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-geotilegrid-aggregation.html).

I am opting to store the tile results of group_by as a `geo_shape` so that users can query the results. Additionally, the shapes could be visualized and filtered in the Kibana Maps app.

relates to https://github.com/elastic/elasticsearch/issues/56121
2020-05-11 17:02:40 -04:00
Lee Hinman 1337b35572
Remove prefer_v2_templates query string parameter (#56545)
This commit removes the `prefer_v2_templates` flag and setting. This was a brief setting that
allowed specifying whether V1 or V2 template should be used when an index is created. It has been
removed in favor of V2 templates always having priority.

Relates to #53101
Resolves #56528

This is not a breaking change because this flag was never in a released version.
2020-05-11 14:56:42 -06:00
Brandon Morelli 659edb92ff
docs: [7.x][apm] link to master in n.x branches (#56539) 2020-05-11 13:42:37 -07:00
zhenxianyimeng 8e96e5c936
Use CollectionUtils.isEmpty where appropriate (#55910)
This commit uses the isEmpty utility method for arrays in place of null and greater than zero checks.
2020-05-11 09:55:57 -07:00
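
For illustration, a generic stand-in for the array check described above (the real helper lives in Elasticsearch's CollectionUtils; this sketch only shows the before/after shape of the change):
```
public class ArrayEmptyCheckSketch {
    // Equivalent of the utility used in the commit: a null-safe emptiness check for arrays.
    static boolean isEmpty(Object[] array) {
        return array == null || array.length == 0;
    }

    public static void main(String[] args) {
        String[] indices = {"logs-1", "logs-2"};
        // Before: if (indices != null && indices.length > 0) { ... }
        // After:
        if (isEmpty(indices) == false) {
            System.out.println("processing " + indices.length + " indices");
        }
    }
}
```
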
Armin Braun 3ab6eba6bc
Fix RollupJobTaskTests Leaking Threads on Slowness (#56438) (#56518)
We are ensuring order in the two tests changed by waiting on latches.
The problem is that 3s is a pretty short wait and on CI can randomly be exceeded
by pure chance. If that happened we wouldn't have visibility on it since we didn't
assert that the waits actually worked.
=> Fixed by asserting that the waits work and upping the timeout to our standard 10s.
Also, moved to a per-test threadpool to make it simpler to identify which test failed,
should an unexpected task run on a closed client's pool after all.
2020-05-11 17:24:10 +02:00
Jim Ferenczi 02ab9112a9 Fix spurious failures in AsyncSearchIntegTestCase (#56026)
Async search integration tests are subject to random failures when:
  * The test index has more than one replica.
  * The request cache is used.
  * Some shards are empty.
  * The maintenance service starts a garbage collection when a node is closing.

They are also slow because the test index is created/populated on each
test method.

This change refactors these integration tests in order to:
  * Create the index once for the entire test suite.
  * Fix the usage of the request cache and replicas.
  * Ensure that all shards have at least one document.
  * Increase the delay of the maintenance service garbage collection.

Closes #55895
Closes #55988
2020-05-11 15:03:03 +02:00
Martijn van Groningen 9ae09570d8
Allow a number of broadcast transport actions to resolve data streams (#55726) (#56502)
Change TransportBroadcastByNodeAction and TransportBroadcastReplicationAction
to be able to resolve data streams by default. Implementations can change this ability.

This change allows the following APIs to resolve data streams: flush,
refresh (which already supported data streams), force merge, clear indices cache,
indices stats (which already supported data streams), segments, upgrade stats,
upgrade, validate query, searchable snapshots stats, clear searchable snapshots cache and
reload analyzers APIs.

Relates to #53100
2020-05-11 12:48:35 +02:00
Rene Groeschke c29bc87040
Move bwcVersions extension property to BuildParams (back port) (#56381)
* Move bwcVersions extension property to BuildParams (#56206)
* Fix :qa Task Using Broken BwC Versions Resolution (#56332)

Co-authored-by: Armin Braun <me@obrown.io>
2020-05-11 09:39:13 +02:00
Nik Everett 2f38aeb5e2
Save memory when numeric terms agg is not top (#55873) (#56454)
Right now all implementations of the `terms` agg allocate a new
`Aggregator` per bucket. This uses a bunch of memory. Exactly how much
isn't clear but each `Aggregator` ends up making its own objects to read
doc values which have non-trivial buffers. And it forces all of its
sub-aggregations to do the same. We allocate a new `Aggregator` per
bucket for two reasons:

1. We didn't have an appropriate data structure to track the
   sub-ordinals of each parent bucket.
2. You can only make a single call to `runDeferredCollections(long...)`
   per `Aggregator` which was the only way to delay collection of
   sub-aggregations.

This change switches the method that builds aggregation results from
building them one at a time to building all of the results for the
entire aggregator at the same time.

It also adds a fairly simplistic data structure to track the sub-ordinals
for `long`-keyed buckets.

It uses both of those to power numeric `terms` aggregations and removes
the per-bucket allocation of their `Aggregator`. This fairly
substantially reduces memory consumption of numeric `terms` aggregations
that are not the "top level", especially when those aggregations contain
many sub-aggregations. It also is a pretty big speed up, especially when
the aggregation is under a non-selective aggregation like
the `date_histogram`.

I picked numeric `terms` aggregations because those have the simplest
implementation. At least, I could kind of fit it in my head. And I
haven't fully understood the "bytes"-based terms aggregations, but I
imagine I'll be able to make similar optimizations to them in follow up
changes.
2020-05-08 20:38:53 -04:00
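
As a rough illustration of the "sub-ordinals per parent bucket" bookkeeping mentioned above (the real implementation uses specialized long-keyed structures; this hash-map sketch only shows the idea):
```
import java.util.HashMap;
import java.util.Map;

public class SubOrdinalTrackerSketch {
    // Hands out dense per-parent-bucket ordinals for long-valued terms.
    private final Map<Long, Map<Long, Long>> ordsByParent = new HashMap<>();

    long ordinalFor(long parentBucketOrd, long term) {
        Map<Long, Long> ords = ordsByParent.computeIfAbsent(parentBucketOrd, k -> new HashMap<>());
        return ords.computeIfAbsent(term, k -> (long) ords.size());
    }

    public static void main(String[] args) {
        SubOrdinalTrackerSketch tracker = new SubOrdinalTrackerSketch();
        System.out.println(tracker.ordinalFor(0, 42L)); // 0 : first term seen in parent bucket 0
        System.out.println(tracker.ordinalFor(0, 7L));  // 1 : second term in parent bucket 0
        System.out.println(tracker.ordinalFor(1, 42L)); // 0 : ordinals restart for parent bucket 1
    }
}
```
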
Mark Vieira 0fb9bc5379
Always use archive base name as the pom artifact id (#56447) (#56467) 2020-05-08 16:11:19 -07:00
Armin Braun 0a254cf223
Serialize Monitoring Bulk Request Compressed (#56410) (#56442)
Even with changes from #48854 we're still seeing significant (as in tens and hundreds of MB)
buffer usage for bulk exports in some cases which destabilizes master nodes.
Since we need to know the serialized length of the bulk body we can't do the serialization
in a streaming manner. (also it's not easily doable with the HTTP client API we're using anyway).
=> let's at least serialize on heap in compressed form and decompress as we're streaming to the
HTTP connection. For small requests this adds negligible overhead but for large requests this reduces
the size of the payload field by about an order of magnitude (empirically determined) which is a massive reduction in size when considering O(100MB) bulk requests.
2020-05-08 23:16:07 +02:00
Dimitris Athanasiou 44ffa388ac
[7.x][ML] Use non-zero timeout when force stopping DF analytics (#56423) (#56428)
We have been using a zero timeout in the case that DF analytics
is stopped. This may cause a timeout when we cancel, for example,
the reindex task.

This commit fixes this by using the default timeout instead.

Backport of #56423
2020-05-08 21:12:11 +03:00
David Roberts 9a3924a641
[ML] Adjust list of platforms that have ML native code (#56426)
Native code is now available for linux-aarch64.

Note that it is _not_ currently supported!
2020-05-08 16:22:45 +01:00
Dimitris Athanasiou c117ae7a6e
[7.x][ML] Force stopping stopped DF analytics should succeed (#56421) (#56424)
Force stopping a DF analytics job whose config exists and that
is stopped should succeed. This was broken by #56360.

Closes #56414

Backport of #56421
2020-05-08 18:04:24 +03:00
Tanguy Leroux 8e9b69bfd7
Use snapshot information to build searchable snapshot store MetadataSnapshot (#56289) (#56403)
While investigating possible optimizations to speed up searchable
snapshots shard restores, we noticed that Elasticsearch builds the
list of shard files on local disk in order to compare it with the list of
files contained in the snapshot to restore. This list of files is
materialized with a MetadataSnapshot object whose construction
involves reading the footer checksum of every file of the shard
using the Store.checksumFromLuceneFile() method.

Further investigation shows that a MetadataSnapshot object is
also created for other types of operations like building the list of
files to recover in a peer recovery (and primary shard relocation)
or in order to assign a shard to a node. These operations use the
Store.getMetadata(IndexCommit) method to build the list of files
and checksums.

In the case of searchable snapshots building the MetadataSnapshot
object can potentially trigger cache misses, which in turn can
cause the last range of the file to be downloaded and written to the
cache in order to check the 16-byte footer. This in turn can
cause more evictions.

Since searchable snapshots already contain the footer information
of every file in BlobStoreIndexShardSnapshot, they can directly read the
checksum from it and avoid using the cache at all to create a
MetadataSnapshot for the operations mentioned above.

This commit adds a shortcut to the
SearchableSnapshotDirectory.openInput() method - similarly to what
already exists for segment infos - so that it creates a specific
IndexInput for checksum reading operation.
2020-05-08 14:16:19 +02:00
Dimitris Athanasiou 60b1c67409
[7.x][ML] Allow stopping DF analytics whose config is missing (#56360) (#56408)
It is possible that the config document for a data frame
analytics job is deleted from the config index. If that is
the case the user is unable to stop a running job because
we attempt to retrieve the config and that will throw.

This commit changes that. When the request is forced,
we expand the requested ids from the list of running
tasks instead of from the existing configs.

Backport of #56360
2020-05-08 13:54:44 +03:00
Hendrik Muhs cc35d37788 [Transform] unmute transform upgrade tests (#56296)
the transform upgrade tests broke due to #56238, but got fixed with #56274

fixes #56269
fixes #56250
2020-05-08 10:48:58 +02:00
Dimitris Athanasiou d064eda2b0
[7.x][ML] Ensure phase progress may only increase (#56339) (#56357)
Due to multi-threading it is possible that phase progress
updates written from the c++ process arrive reordered.
We can address this by ensuring that progress may only increase.

Closes #56282

Backport of #56339
2020-05-07 19:46:58 +03:00
William Brafford 691044e67b
Add xpack setting deprecations to deprecation API (#56290)
* Add xpack setting deprecations to deprecation API

The deprecated settings showed up in the deprecation log file by
default, but I did not add them to the deprecation API. This commit
fixes that. Now if you use one of the deprecated basic feature
enablement settings, calling _monitoring/deprecations will inform you of
that fact.

* Remove incorrectly backported settings documents

It seems that I backported these docs to the wrong place in #56061,
in #55980, and in #56167. I hope they're in the right place now.

Co-authored-by: debadair <debadair@elastic.co>
2020-05-07 10:28:17 -04:00
Nik Everett e35919d3b8
Optimize date_histograms across daylight savings time (backport of #55559) (#56334)
Rounding dates on a shard that contains a daylight savings time transition
is currently something like 1400% slower than when a shard contains dates
only on one side of the DST transition. And it makes a ton of short lived
garbage. This replaces that implementation with one that benchmarks to
having around 30% overhead instead of the 1400%. And it doesn't generate
any garbage per search hit.

Some background:
There are two ways to round in ES:
* Round to the nearest time unit (Day/Hour/Week/Month/etc)
* Round to the nearest time *interval* (3 days/2 weeks/etc)

I'm only optimizing the first one in this change and plan to do the second
in a follow up. It turns out that rounding to the nearest unit really *is*
two problems: when the unit rounds to midnight (day/week/month/year) and
when it doesn't (hour/minute/second). Rounding to midnight is consistently
about 25% faster than rounding to individual hours or minutes.

This optimization relies on being able to *usually* figure out what the
minimum and maximum dates are on the shard. This is similar to an existing
optimization where we rewrite time zones that aren't fixed
(think America/New_York and its daylight savings time transitions) into
fixed time zones so long as there isn't a daylight savings time transition
on the shard (UTC-5 or UTC-4 for America/New_York). Once I implement
time interval rounding the time zone rewriting optimization *should* no
longer be needed.

This optimization doesn't come into play for `composite` or
`auto_date_histogram` aggs because neither have been migrated to the new
`DATE` `ValuesSourceType` which is where that range lookup happens. When
they are they will be able to pick up the optimization without much work.
I expect this to be substantial for `auto_date_histogram` but less so for
`composite` because it deals with fewer values.

Note: My 30% overhead figure comes from small numbers of daylight savings
time transitions. That overhead gets higher when there are more
transitions in logarithmic fashion. When there are two thousand years
worth of transitions my algorithm ends up being 250% slower than rounding
without a time zone, but java time is 47000% slower at that point,
allocating memory as fast as it possibly can.
2020-05-07 09:10:51 -04:00
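
As an illustration of the "round to the nearest time unit" case described above (not the Elasticsearch implementation itself), here is a standalone java.time sketch that rounds an instant down to the start of its day in a zone with DST transitions; the zone and timestamp are arbitrary examples:
```
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class DayRoundingSketch {
    // Round an instant down to the start of its calendar day in the given zone.
    // This is the kind of unit rounding (day/week/month/year) the commit optimizes.
    static Instant roundDownToDay(Instant instant, ZoneId zone) {
        ZonedDateTime zoned = instant.atZone(zone);
        return zoned.toLocalDate().atStartOfDay(zone).toInstant();
    }

    public static void main(String[] args) {
        ZoneId newYork = ZoneId.of("America/New_York");
        // 2020-03-08 is a DST transition day in America/New_York (02:00 -> 03:00).
        Instant input = Instant.parse("2020-03-08T12:00:00Z");
        // Prints 2020-03-08T05:00:00Z, i.e. local midnight before the transition.
        System.out.println(roundDownToDay(input, newYork));
    }
}
```
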
Tanguy Leroux 6233e32ab3 Fix SearchableSnapshotDirectoryTests.testIndexSearcher() (#56275)
Closes #56233
2020-05-07 11:12:35 +02:00
Tanguy Leroux 65a061e33a Fix SearchableSnapshotDirectoryTests.testClearCache (#56277)
This test sometimes fails when prewarming is enabled because 
it's possible that some files are cached in the background while the
test tries to clear the cache. This commit disables prewarming 
for this test.
2020-05-07 10:59:33 +02:00
Andrei Stefan 980f175222
EQL: simplify equals/not-equals TRUE/FALSE expressions (#56191) (#56306)
* Simplify equals/not-equals TRUE/FALSE expressions, by returning them
as is (TRUE variant) or negating them (FALSE variant)

(cherry picked from commit 17858afbe6da5fa0b3ecfc537cabb337e4baaffe)
2020-05-07 03:02:04 +03:00
Jason Tedor c775e47054
Fix missing SHAs for Jackson 2.10.4
This was not picked up on a backport, so this commit adds the missing
SHAs, and removes the old ones.
2020-05-06 17:28:24 -04:00
Jason Tedor 33669c0420
Upgrade to Jackson 2.10.4 (#56188)
Another Jackson release is available. There are some CVEs addressed,
none of which impact us, but since we can now bump Jackson easily, let
us move along with the train to avoid the false positives from security
scanners.
2020-05-06 17:20:23 -04:00
Przemysław Witek 0cd0ab276e
Introduce Annotation.Builder class and use it to create instances of Annotation class (#56276) (#56286) 2020-05-06 20:47:03 +02:00
Julie Tibshirani e852bb29b7
Simplify signature of FieldMapper#parseCreateField. (#56144)
`FieldMapper#parseCreateField` accepts the parse context, plus a list of fields
as an output parameter. These fields are immediately added to the document
through `ParseContext#doc()`.

This commit simplifies the signature by removing the list of fields, and having
the mappers add the fields directly to `ParseContext#doc()`. I think this is
nicer for implementors, because previously fields could be added either through
the list, or the context (through `add`, `addWithKey`, etc.)
2020-05-06 11:12:09 -07:00
Dimitris Athanasiou 011e995165
[7.x][ML] Unmute ClassificationIT.testDependentVariableCardinalityTooHighButWithQueryMakesItWithinRange (#56268) (#56287)
Closes #56240
2020-05-06 18:20:46 +03:00
Navneet Kumar a649f85358
[DOCS] Create API key API requires `name` request body param (#56262)
Fixes #56164. A minor update to the documentation: the API key name is required when creating an API key. If the API key name is not provided, the request will fail.
2020-05-06 08:52:45 -04:00
Luca Cavanna 9a9cb68e83 Async Search: correct shards counting (#55758)
Async search allows users to retrieve partial results for a running search. For partial results, the number of successful shards does not include the skipped shards, while the response returned to users should.

Also, we recently had a bug where async search would miss tracking shard failures, which would have been caught if we had assertions in place that verified that whenever we get the last response, the number of failures included in it is the same as the failures that were tracked through the listener notifications.
2020-05-06 12:13:30 +02:00
Tanguy Leroux 07ad742b60 Enable prewarming by default for searchable snapshots (#56201)
Now that searchable snapshot directories respect the repository
rate limitations (#55952), we can enable prewarming by default
for shards.
2020-05-06 10:18:34 +02:00
Tanguy Leroux 131a3911eb Replace BlobContainerWrapper by FilterBlobContainer (#56200)
A FilterBlobContainer class was introduced in #55952 and it delegates
its behavior to a given BlobContainer while allowing only the necessary
methods to be overridden.

This commit replaces the existing BlobContainerWrapper class from 
the test framework with the new FilterBlobContainer from core.
2020-05-06 10:05:43 +02:00
Julie Tibshirani bd7a2d2b01 Mute the geogrid agg circuit breaker tests. 2020-05-05 18:09:07 -07:00
Julie Tibshirani dc738e34d2 Mute the mixed cluster 80_transform_jobs_crud test. 2020-05-05 17:58:17 -07:00
Julie Tibshirani 7c55db9b04 Mute TransformSurvivesUpgradeIT#testTransformRollingUpgrade. 2020-05-05 17:37:17 -07:00
Jake Landis a22690c9ca
[7.x] Ensure that the monitoring export exceptions are logged. (#56237) (#56251)
If an exception occurs while flushing a bulk the cause of the exception
can be lost. This commit ensures that the cause of the exception is carried
forward and gets logged.
2020-05-05 19:24:26 -05:00
Julie Tibshirani 133ba2691f Make sure to mute all 80_transform_jobs_crud tests. 2020-05-05 17:07:59 -07:00
Julie Tibshirani 49de092b38 Mute RegressionIT.testTwoJobsWithSameRandomizeSeedUseSameTrainingSet. 2020-05-05 16:25:36 -07:00
Bogdan Pintea 47250b14a4
SQL: Add BigDecimal support to JDBC (#56015) (#56220)
* SQL: Add BigDecimal support to JDBC (#56015)

* Introduce BigDecimal support to JDBC -- fetching

This commit adds support for the getBigDecimal() methods.

* Allow BigDecimal params in double range

A prepared statement will now accept a BigDecimal parameter as a proxy
for a double, if the conversion is lossless.

(cherry picked from commit e9a873ad7f387682e3472110b1d7c0514bd347c9)

* Fix compilation error

Diamond notation with anonymous inner classes is not available in Java 8.
2020-05-05 23:19:36 +02:00
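
A minimal JDBC sketch of the behaviour described above, assuming the Elasticsearch SQL JDBC driver is on the classpath, a cluster is reachable at localhost:9200, and an index "prices" with a double field "amount" exists (these are illustrative assumptions, not part of the commit):
```
import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class BigDecimalJdbcSketch {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection("jdbc:es://localhost:9200");
             PreparedStatement ps = con.prepareStatement("SELECT amount FROM prices WHERE amount > ?")) {
            // A BigDecimal parameter is accepted as a proxy for a double,
            // provided the conversion is lossless.
            ps.setBigDecimal(1, new BigDecimal("10.25"));
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    BigDecimal amount = rs.getBigDecimal(1); // newly supported getter
                    System.out.println(amount);
                }
            }
        }
    }
}
```
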
Bogdan Pintea f159fd8a20
Fix test on incompatible client versions (#56234) (#56241)
The incompatible client version test is changed to:
- iterate on all versions prior to the allowed one(s);
- format the exception message just as the server does it.

The defect stemmed from the fact that the clients will not send a
version's qualifier, but just major.minor.revision, so the raised
error/exception message won't contain it, while the test expected it.

(cherry picked from commit 4a81c8f7a1f4573e3be95f346d9fb18772b297ee)
2020-05-05 23:18:29 +02:00
Julie Tibshirani 63062ec7bd Mute ClassificationIT.testDependentVariableCardinalityTooHighButWithQueryMakesItWithinRange. 2020-05-05 13:48:35 -07:00
Dan Hermann 6674f14fb3
[7.x] Get index includes parent data stream for backing indices (#56238) 2020-05-05 15:43:42 -05:00
Benjamin Trent e1c5ca421e
[7.x] [ML] lay ground work for handling >1 result indices (#55892) (#56192)
* [ML] lay ground work for handling >1 result indices (#55892)

This commit removes all but one reference to `getInitialResultsIndexName`. 
This is to support more than one result index for a single job.
2020-05-05 15:54:08 -04:00
Julie Tibshirani 793f265451 Mute SearchableSnapshotDirectoryTests.testIndexSearcher. 2020-05-05 12:29:05 -07:00
Ross Wolf 389082033e
EQL: Add concat function (#55193)
* EQL: Add concat function
* EQL: for loop spacing for concat
* EQL: return unresolved arguments to concat early
* EQL: Add concat integration tests
* EQL: Fix concat query fail test
* EQL: Add class for concat function testing
* EQL: Add concat integration tests
* EQL: Update concat() null behavior
2020-05-05 12:53:34 -06:00
Bogdan Pintea 23c35e32f2
SQL: introduce a query builder for the Rest tests (#55094) (#56221)
* Introduce a query builder for the rest tests

The new BaseRestSqlTestCase.RequestObjectBuilder class is a helper class
to build REST request objects for the tests. Consequently, "manual" string
concatenation to form JSON is done away with.

The class mimics SqlQueryRequestBuilder API.

(cherry picked from commit c8363f04c029542c233a758e9286d33c51d9c0c4)
2020-05-05 18:55:41 +02:00
Tal Levy e4f2c3105d
Add geo_shape support for geotile_grid and geohash_grid (#55966) (#56228)
This commit adds aggregation support for the geo_shape field
type on geo*_grid aggregations.

It introduces a Tiler for both tiles and hashes that enables a new type of
ValuesSource to replace the GeoPoint's CellIdSource. This makes it possible
for the existing Aggregator to be re-used, so no new implementations of
the grid aggregators are added.
2020-05-05 09:54:14 -07:00
Benjamin Trent 641f598364
[Transform] fixes http status code when bad scripts are provided (#56117) (#56219)
Transforms should propagate up the search execution exception if one is returned when it does the test query. 

This allows transforms to return a `4xx` when the aggs are malformed but parseable.

closes https://github.com/elastic/elasticsearch/issues/55994
2020-05-05 12:36:22 -04:00
Bogdan Pintea 0e5632dc3a
SQL: relax version lock between server and clients (#56148) (#56223)
* Relax version lock between ES/SQL and clients

Allow older-than-server clients to connect, if these are past or on a
certain min release.

(cherry picked from commit 108f907297542ce649aa7304060aaf0a504eb699)
2020-05-05 18:27:06 +02:00
William Brafford 3499fa917c
Deprecated xpack "enable" settings should be no-ops (#55416) (#56167)
The following settings are now no-ops:

* xpack.flattened.enabled
* xpack.logstash.enabled
* xpack.rollup.enabled
* xpack.slm.enabled
* xpack.sql.enabled
* xpack.transform.enabled
* xpack.vectors.enabled

Since these settings no longer need to be checked, we can remove settings
parameters from a number of constructors and methods, and do so in this
commit.

We also update documentation to remove references to these settings.
2020-05-05 10:40:49 -04:00
Tanguy Leroux b9636713b1
Searchable Snapshots should respect max_restore_bytes_per_sec (#55952) (#56199)
This commit changes searchable snapshots so that it now respects the 
repository's max_restore_bytes_per_sec setting when it downloads blobs.

Backport of #55952 for 7.x
2020-05-05 15:43:06 +02:00
David Roberts 7aa0daaabd
[7.x][ML] More advanced model snapshot retention options (#56194)
This PR implements the following changes to make ML model snapshot
retention more flexible in advance of adding a UI for the feature in
an upcoming release.

- The default for `model_snapshot_retention_days` for new jobs is now
  10 instead of 1
- There is a new job setting, `daily_model_snapshot_retention_after_days`,
  that defaults to 1 for new jobs and `model_snapshot_retention_days`
  for pre-7.8 jobs
- For days that are older than `model_snapshot_retention_days`, all
  model snapshots are deleted as before
- For days that are in between `daily_model_snapshot_retention_after_days`
  and `model_snapshot_retention_days` all but the first model snapshot
  for that day are deleted
- The `retain` setting of model snapshots is still respected to allow
  selected model snapshots to be retained indefinitely

Backport of #56125
2020-05-05 14:31:58 +01:00
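
A hedged sketch of the retention decision described above, purely to illustrate the interplay of the two settings and the retain flag; the method and parameter names are invented for this example:
```
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

public class SnapshotRetentionSketch {
    // Decide whether a model snapshot taken on snapshotDay should be deleted.
    static boolean shouldDelete(LocalDate snapshotDay, boolean firstOfDay, boolean retain,
                                LocalDate today, long dailyRetentionAfterDays, long retentionDays) {
        if (retain) {
            return false; // explicitly retained snapshots are kept indefinitely
        }
        long ageInDays = ChronoUnit.DAYS.between(snapshotDay, today);
        if (ageInDays > retentionDays) {
            return true; // older than model_snapshot_retention_days: delete everything
        }
        if (ageInDays > dailyRetentionAfterDays) {
            return firstOfDay == false; // keep only the first snapshot of each day
        }
        return false; // recent snapshots are all kept
    }

    public static void main(String[] args) {
        LocalDate today = LocalDate.of(2020, 5, 5);
        // 3 days old, with the new defaults of 1 and 10: only the first snapshot of that day survives.
        System.out.println(shouldDelete(today.minusDays(3), false, false, today, 1, 10)); // true
        System.out.println(shouldDelete(today.minusDays(3), true, false, today, 1, 10));  // false
    }
}
```
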
Hendrik Muhs faadb388da
mute mixed continuous transforms upgrade test (#56198)
mute transform upgrade test, see #56196
2020-05-05 14:40:50 +02:00
David Turner 40ea0eabd9 Forbid snapshot access on applier thread (#56044)
This commit strengthens the assertion about which threads may access a blob
store to exclude the cluster applier thread, since we no longer need to do so.

Relates #50999
2020-05-05 13:27:55 +01:00
Dimitris Athanasiou 2d7899c83c
[7.x][ML] Adjust DF Analytics process phases (#56107) (#56177)
As of elastic/ml-cpp#1179, the analytics process reports phases
depending on the analysis type. This commit adjusts the phases
of current analyses from `analyzing` to the following:

 - outlier_detection: [`computing_outlier`]
 - regression/classification: [`feature_selection`, `coarse_parameter_search`, `fine_tuning_parameters`, `final_training`]

Backport of #56107
2020-05-05 15:00:07 +03:00
Dimitris Athanasiou 75dadb7a6d
[7.x][ML] Add loss_function to regression (#56118) (#56187)
Adds parameters `loss_function` and `loss_function_parameter`
to regression.

Backport of #56118
2020-05-05 14:59:51 +03:00
Hendrik Muhs e177a38504
[7.x][Transform] add throttling (#56007) (#56184)
Add throttling to transform; throttling will slow down search requests by
delaying the execution based on a documents-per-second metric.

fixes #54862
2020-05-05 13:09:02 +02:00
Marios Trivyzas 363e994171
SQL: Fix DATETIME_PARSE behaviour regarding timezones (#56158) (#56182)
Previously, when the timezone was missing from the datetime string
and the pattern, UTC was used, instead of the session defined timezone.
Moreover, if a timezone was included in the datetime string and the
pattern then this timezone was used. To have a consistent behaviour
the resulting datetime will always be converted to the session defined
timezone, e.g.:
```
SELECT DATETIME_PARSE('2020-05-04 10:20:30.123 +02:00', 'uuuu-MM-dd HH:mm:ss.SSS VV') AS datetime;
```
with `time_zone` set to `-03:00` will result in
```
2020-05-04T05:20:30.123-03:00
```

Follows: #54960
(cherry picked from commit 8810ed03a209cc8fe1bad309a81e85b56a39da27)
2020-05-05 12:08:39 +02:00
Tanguy Leroux f717830563
Use workers to warm cache parts (#55793) (#56181)
Today the cache prewarming introduced in #55322 works by 
enqueuing all the file parts to warm in the
searchable_snapshots thread pool. In order to make this fairer
among concurrent warmings, this commit starts workers that
concurrently poll file parts to warm from a queue, warm the
part and then immediately schedule another warming
execution. This should leave more room for concurrent
shard warming to sneak in and be executed.

Relates #55322
2020-05-05 11:48:06 +02:00
Tanguy Leroux 35622747fd
Add Minio tests for searchable snapshots (#56112) (#56179)
This commit adds QA tests for searchable snapshots on MinIO,
similar to what already exists for S3, GCS and Azure.
2020-05-05 11:40:06 +02:00
Marios Trivyzas cc21468559
SQL: Fix issue with date range queries and timezone (#56115) (#56174)
Previously, the timezone parameter was not passed to the RangeQuery
and as a result, queries that use the ES date math notation (now,
now-1d, now/d, now/h, now+2h, etc.) were using the UTC timezone and
not the one passed through the "timezone"/"time_zone" JDBC/REST params.
As a consequence, the date math defined dates were always considered in
UTC and possibly led to incorrect results for queries like:
```
SELECT * FROM t WHERE date BETWEEN now-1d/d AND now/d
```

Fixes: #56049
(cherry picked from commit 300f010c0b18ed0f10a41d5e1606466ba0a3088f)
2020-05-05 10:54:23 +02:00
Dimitris Athanasiou 6061aa3db4
[7.x][ML] Fix race condition updating reindexing progress (#56135) (#56146)
In #55763 I thought I could remove the flag that marks
reindexing was finished on a data frame analytics task.
However, that exposed a race condition. It is possible that
between updating reindexing progress to 100 because we
have called `DataFrameAnalyticsManager.startAnalytics()` and
a call to the _stats API which updates reindexing progress via the
method `DataFrameAnalyticsTask.updateReindexTaskProgress()` we
end up overwriting the 100 with a lower progress value.

This commit fixes this issue by bringing back the help of
an `isReindexingFinished` flag as it was prior to #55763.

Closes #56128

Backport of #56135
2020-05-05 10:48:42 +03:00
Albert Zaharovits e8763bad41
Let realms gracefully terminate the authN chain (#55623)
AuthN realms are ordered as a chain so that the credentials of a given
user are verified in succession. Upon the first successful verification,
the user is authenticated. Realms do however have the option to cut short
this iterative process, when the credentials don't verify and the user
cannot exist in any other realm. This mechanism is currently used by
the Reserved and the Kerberos realm.

This commit improves the early termination operation by allowing
realms to gracefully terminate authentication, as if the chain has been
tried out completely. Previously, early termination resulted in an
authentication error which varies the response body compared
to the failed authentication outcome where no realm could verify the
credentials successfully.

Reserved users are hence denied authentication in exactly the same
way as other users are when no realm can validate their credentials.
2020-05-05 10:11:49 +03:00
Martijn van Groningen 2ac32db607
Move includeDataStream flag from IndicesOptions to IndexNameExpressionResolver.Context (#56151)
Backport of #56034.

Move includeDataStream flag from an IndicesOptions to IndexNameExpressionResolver.Context
as a dedicated field that callers to IndexNameExpressionResolver can set.

Also alter the indices stats api to support data streams.
The rollover api uses this api, and otherwise rolling over a data stream no longer works.

Relates to #53100
2020-05-04 22:38:33 +02:00
Dan Hermann 9892813842
[7.x] Delay warning about missing x-pack (#56142)
* Delay warning about missing x-pack (#54265)

Currently, when monitoring is enabled in a freshly-installed cluster,
the non-master nodes log a warning message indicating that master may
not have x-pack installed. The message is often printed even when the
master does have x-pack installed but takes some time to setup the local
exporter for monitoring. This commit adds the local exporter setting
`wait_master.timeout` which defaults to 30 seconds. The setting
configures the time that the non-master nodes should wait for master to
setup monitoring. After the time elapses, they log a message to the user
about possible missing x-pack installation on master.

The logging of this warning was moved from `resolveBulk()` to
`openBulk()` since `resolveBulk()` is called only on cluster updates and
the message might not be logged until a new cluster update occurs.

Closes #40898
2020-05-04 14:16:18 -05:00
Benjamin Trent 6c26de444d
[ML] reduce InferenceProcessor.Factory log spam by not parsing pipelines (#56020) (#56126)
If there are ill-formed pipelines, or other pipelines are not ready to be parsed, `InferenceProcessor.Factory::accept(ClusterState)` logs warnings. This can be confusing and cause log spam.

It might lead folks to think there is an issue with the inference processor. Also, they would see logs for the inference processor even though they might not be using the inference processor, leading to more confusion.

Additionally, pipelines might not be parseable in this method as some processors require the new cluster state metadata before construction (e.g. `enrich` requires cluster metadata to be set before creating the processor).

closes https://github.com/elastic/elasticsearch/issues/55985
2020-05-04 13:32:01 -04:00
Martijn van Groningen 6d03081560
Add auto create action (#56122)
Backport of #55858 to 7.x branch.

Currently the TransportBulkAction detects whether an index is missing and
then decides whether it should be auto created. The coordination of the
index creation also happens in the TransportBulkAction on the coordinating node.

This change adds a new transport action that the TransportBulkAction delegates to
if missing indices need to be created. The reasons for this change:

* Auto creation of data streams can't occur on the coordinating node.
Based on the index template (v2) either a regular index or a data stream should be created.
However if the coordinating node is slow in processing cluster state updates then it may be
unaware of the existence of certain index templates, which can then lead to the
TransportBulkAction creating an index instead of a data stream. Therefore the coordination of
creating an index or data stream should occur on the master node. See #55377

* From a security perspective it is useful to know whether index creation originates from the
create index api or from auto creating a new index via the bulk or index api. For example
a user would be allowed to auto create an index, but not to use the create index api. The
auto create action will allow security to distinguish these two different patterns of
index creation.
This change adds the following new transport actions:

AutoCreateAction, the TransportBulkAction redirects to this action and this action will actually create the index (instead of the TransportCreateIndexAction). Later, via #55377, the AutoCreateAction can be improved to also determine whether an index or data stream should be created.

The create_index index privilege is also modified, so that if this permission is granted then a user is also allowed to auto create indices. This change does not yet add an auto_create index privilege. A future change can introduce this new index privilege or modify an existing index / write index privilege.

Relates to #53100
2020-05-04 19:10:09 +02:00
Julie Tibshirani 6b5cf1b031 For constant_keyword, make sure exists query handles missing values. (#55757)
It's possible for a constant_keyword to have a 'null' value before any documents
are seen that contain a value for the field. In this case, no documents have a
value for the field, and 'exists' queries should return no documents.
2020-05-04 09:41:52 -07:00
Ross Wolf 6da686c7e0
EQL: Add match function implementation (#55182)
* EQL: Add Match function
* EQL: Add note about character classes
* EQL: QueryFolderFailTests.java
* EQL: Add match() fail tests
* EQL: Add match tests and fix alias
* EQL: Add match verifier failure tests
* EQL: Reorder query folder fail tests
2020-05-04 09:34:20 -06:00
Dimitris Athanasiou 76fa5a2397
[7.x][ML] Improve cleanup for DF Analytics HLRC tests (#56101) (#56109)
Adds the step of stopping all data frame analytics before
deleting them to the cleanup of the corresponding HLRC tests.

Closes #56097

Backport of #56101
2020-05-04 16:08:08 +03:00
Andrei Stefan 5d1bc6c89c
EQL: reject queries that use a nested field or a sub-field of a nested field (#56108)
* Reject queries that act on nested fields or fields with nested field types in their hierarchy (#55721)

(cherry picked from commit 2a024461cd9da821112953d4c6e565ea622c678b)
2020-05-04 15:50:31 +03:00
Przemysław Witek 44f5a8ccd3
Use snapshot's latest result time rather than snapshot's creation time when creating an annotation (#56093) (#56103) 2020-05-04 12:36:12 +02:00
Christos Soulios c65f828cb7
[7.x] Histogram field type support for ValueCount and Avg aggregations (#56099)
Backports #55933 to 7.x

Implements value_count and avg aggregations over Histogram fields as discussed in #53285

- value_count returns the sum of the counts array of the histograms
- avg computes a weighted average of the values array of the histogram by multiplying each value with its associated element in the counts array
2020-05-04 13:23:02 +03:00
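
To make the arithmetic above concrete, a small standalone sketch (the sample values and counts are made up) of how value_count and the weighted average are computed from a histogram field's parallel arrays:
```
public class HistogramAggMath {
    public static void main(String[] args) {
        double[] values = {0.1, 0.25, 0.5};
        long[] counts = {3, 7, 10};

        long valueCount = 0;        // value_count: sum of the counts array
        double weightedSum = 0.0;   // used for avg: sum(values[i] * counts[i])
        for (int i = 0; i < values.length; i++) {
            valueCount += counts[i];
            weightedSum += values[i] * counts[i];
        }
        double avg = weightedSum / valueCount;
        // value_count=20, avg=(0.3 + 1.75 + 5.0)/20 = 0.3525
        System.out.println("value_count=" + valueCount + " avg=" + avg);
    }
}
```
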
Armin Braun 0860d1dc74
Remove Dead Code in SLM Delete Handling (#56081) (#56098)
The delete response is always acknowledged. No need to handle anything else.
2020-05-04 12:22:06 +02:00
Armin Braun e01b999ef0
Add Functionality to Consistently Read RepositoryData For CS Updates (#55773) (#56091)
Using optimistic locking, add the ability to run a repository state
update task with a consistent view of the current repository data.
Allows for a follow-up to remove the snapshot INIT state.
2020-05-04 08:13:14 +02:00
David Roberts 31e32aa420
[TEST] Allow more warnings about multiple template matches (#56085)
Adds some extra allowed warnings about multiple index templates
matching on index creation of the same type that were added
in #56038.
2020-05-03 21:07:51 +01:00
Armin Braun 3a64ecb6bf
Allow Deleting Multiple Snapshots at Once (#55474) (#56083)
* Allow Deleting Multiple Snapshots at Once (#55474)

Adds deleting multiple snapshots in one go without significantly changing the mechanics of snapshot deletes otherwise.
This change does not yet allow mixing snapshot delete and abort. Abort is still only allowed for a single snapshot delete by exact name.
2020-05-03 20:30:58 +02:00
William Brafford d53c941c41
Make xpack.monitoring.enabled setting a no-op (#55617) (#56061)
* Make xpack.monitoring.enabled setting a no-op

This commit turns xpack.monitoring.enabled into a no-op. Mostly, this involved
removing the setting from the setup for integration tests. Monitoring may
introduce some complexity for test setup and teardown, so we should keep an eye
out for turbulence and failures

* Docs for making deprecated setting a no-op
2020-05-01 16:42:11 -04:00
Andrei Stefan fbba65d8b3
SQL: SubSelect unresolved bugfix (#55956) (#56055)
* Resolve the missing refs only after the aggregate tree is resolved

(cherry picked from commit 10167b1cf2df6b074a1ba0c8e73c261ff9e9d1db)
2020-05-01 07:48:11 +03:00
Ryan Ernst 52b9d8d15e
Convert remaining license methods to isAllowed (#55908) (#55991)
This commit converts the remaining isXXXAllowed methods to instead
use isAllowed with a Feature value. There are a couple other methods
that are static, as well as some licensed features that check the
license directly, but those will be dealt with in other followups.
2020-04-30 15:52:22 -07:00
Igor Motov d8f9df771d
Expose agg usage in Feature Usage API (#55732) (#56048)
Counts usage of the aggs and exposes them via the _nodes/usage API.

Closes #53746
2020-04-30 12:53:36 -04:00
Przemko Robakowski 797f63e743
[7.x] Emit deprecation warning if multiple v1 templates match with a new index (#55558) (#56038)
* Emit deprecation warning if multiple v1 templates match with a new index (#55558)

* Emit deprecation warning if multiple v1 templates match with a new index

* DEPRECATION_LOGGER rename
2020-04-30 17:36:17 +02:00
Luca Cavanna fc6422ffcc Consolidate DelayableWriteable (#55932)
This commit includes a number of minor improvements around `DelayableWriteable`: javadocs were expanded and reworded, `get` was renamed to `expand` and `DelayableWriteable` no longer implements `Supplier`. Also a couple of methods are now private instead of package private.
2020-04-30 17:16:58 +02:00
Benjamin Trent c36bcb4dd0
[ML] fixing file structure finder multiline merge max for delimited formats (#56023) (#56035)
This commit correctly sets the maxLinesPerRow in the CsvPreference for delimited files given the file structure finder settings.

Previously, it was silently ignored.
2020-04-30 10:51:32 -04:00
Benjamin Trent 04b1f6498b
[ML] using new fixed interval in ml tests (#56021) (#56031)
This commit removes deprecated references to DateHistogram.interval from ml tests
2020-04-30 10:26:39 -04:00
Dimitris Athanasiou 17b904def5
[7.x][ML] Decouple DFA progress testing from analyses phases (#55925) (#56024)
This refactors native integ tests to assert progress without
expecting explicit phases for analyses. We can test those with
yaml tests in a single place.

Backport of #55925
2020-04-30 17:05:47 +03:00
William Brafford 273ff6a105
Make xpack.ilm.enabled setting a no-op (#55592) (#55980)
* Make xpack.ilm.enabled setting a no-op

* Add watcher setting to not use ILM

* Update documentation for no-op setting

* Remove NO_ILM ml index templates

* Remove unneeded setting from test setup

* Inline variable definitions for ML templates

* Use identical parameter names in templates

* New ILM/watcher setting falls back to old setting

* Add fallback unit test for watcher/ilm setting
2020-04-30 09:50:18 -04:00
David Kyle c204353249
[ML] Wait for model loaded and cached in ModelLoadingServiceTests (#56014)
Fixes test by exposing the method ModelLoadingService::addModelLoadedListener() 
so that the test class can be notified when a model is loaded which happens in
a background thread
2020-04-30 13:32:07 +01:00
Yang Wang 317d9fb88f
Remove synthetic role names of API keys as they confuse users (#56005) (#56011)
Synthetic role names of API keys add confusion to users. This happens to API responses as well as audit logs. The PR removes them for clarity.
2020-04-30 21:32:55 +10:00
Hendrik Muhs d3bcef2962
[7.x][Transform] implement throttling in indexer (#55011) (#56002)
implement throttling in the async-indexer used by rollup and transform. The added
docs_per_second parameter is used to calculate a delay before the next
search request is sent. With re-throttle it's possible to change the parameter
at runtime. When stopping a running job, it's ensured that despite throttling
the indexer stops in reasonable time. This change contains the groundwork, but
does not expose the new functionality.

relates #54862
backport: #55011
2020-04-30 11:20:35 +02:00
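
A back-of-the-envelope sketch of the kind of delay calculation described above; the formula and names are assumptions for illustration, not the actual indexer code:
```
public class ThrottleDelaySketch {
    // If the last search cycle processed `docsProcessed` documents in `tookMillis`,
    // wait long enough that the overall rate stays at or below docsPerSecond.
    static long throttleDelayMillis(long docsProcessed, long tookMillis, float docsPerSecond) {
        long targetMillis = (long) (1000.0f * docsProcessed / docsPerSecond);
        return Math.max(0, targetMillis - tookMillis);
    }

    public static void main(String[] args) {
        // 500 docs processed in 100ms with a 1000 docs/s budget -> wait another 400ms.
        System.out.println(throttleDelayMillis(500, 100, 1000f));
    }
}
```
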
Ioannis Kakavas 3c7c9573b4
Fix PemKeyConfigTests (#55577) (#55996)
We were creating PemKeyConfig objects using different private
keys but always using the testnode.crt certificate that uses the
RSA public key. The PemKeyConfig was built but we would
then later fail to handle SSL connections during the TLS
handshake either way.
This became obvious in FIPS tests where the consistency
checks that FIPS 140 mandates kicked in and failed early
because the private key was of a different type than the
public key.
2020-04-30 12:05:27 +03:00
Yang Wang 84a2f1adf2
Resolve anonymous roles and deduplicate roles during authentication (#53453) (#55995)
Anonymous roles resolution and user role deduplication are now performed during authentication instead of authorization. The change ensures:

* If anonymous access is enabled, user will be able to see the anonymous roles added in the roles field in the /_security/_authenticate response.
* Any duplication in user roles is removed and will not show in the above authenticate response.
* In any other case, the response is unchanged.

It also introduces a behaviour change: the anonymous role resolution is now authentication node specific, previously it was authorization node specific. Details can be found at #47195 (comment)
2020-04-30 17:34:14 +10:00
Lisa Cawley 006e00ed0a
[DOCS] Adds documentation for secondary authorization headers (#55365) (#55986) 2020-04-29 16:29:38 -07:00
Lisa Cawley 5100fd7eb2
[DOCS] Add token based authn documentation (#55957) 2020-04-29 14:47:02 -07:00
Christos Soulios 43dab77186
[7.x] Modified searchAndReduce() to return empty agg when no docs exist (#55967)
Backports #55826 to 7.x

Modified AggregatorTestCase.searchAndReduce() method so that it returns an empty aggregation result when no documents have been inserted.

Also refactored several aggregation tests so they do not re-implement method AggregatorTestCase.testCase()

Fixes #55824
2020-04-30 00:28:32 +03:00
jimczi 86ee8974d0 Revert "Mute failing tests in AsyncSearchActionIT"
This reverts commit 2fe4801ca1.
2020-04-29 22:22:21 +02:00
Mark Vieira 2fe4801ca1
Mute failing tests in AsyncSearchActionIT 2020-04-29 10:59:10 -07:00
Dimitris Athanasiou c5aa281171
[7.x][ML] Remove error on parsing progress for unknown phase in DFA (#55926) (#55954)
On second thought, this check does not seem to be adding value.
We can test that the phases are as we expect them for each analysis
by adding yaml tests. Those would fail if we introduce new phases
from c++ accidentally or without coordination. This would achieve
the same thing. At the same time we would not have to comment out
this code each time a new phase is introduced. Instead we can just
temporarily mute those yaml tests. Note I will add those tests
right after the imminent new phases are added to the c++ side.

Backport of #55926
2020-04-29 20:11:33 +03:00
Benjamin Trent edd049f9cd
[ML] Allow a certain number of ill-formatted rows when delimited format is specified (#55735) (#55944)
While it is good to not be lenient when attempting to guess the file format, it is frustrating to users when they KNOW it is CSV but there are a few ill-formatted rows in the file (via some entry error, etc.).

This commit allows for up to 10% of sample rows to be considered "bad". These rows are effectively ignored while guessing the format.

This percentage of "allowed bad rows" is only applied when the user has specified delimited formatting options, as the structure finder needs some guidance on what a "bad row" actually means.

related to https://github.com/elastic/elasticsearch/issues/38890
2020-04-29 11:15:21 -04:00
Jim Ferenczi 293c81dd59 Fix AsyncSearchActionIT#testTermsAggregation (#55924)
This commit fixes the initialization of total hits
in the async search response.

Relates #55683
Closes #55920
2020-04-29 15:44:10 +02:00
Jake Landis ae4d980c8c
[7.x] json spec - add description for autoscaling (#55748) (#55901) 2020-04-29 08:40:11 -05:00
Andrei Dan 6a0e1e161b
ILM stop step execution if writeIndex is false (#54805) (#55923)
(cherry picked from commit 47a9fd760f7bf2cc6cd778485dc057b6aaf07709)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
2020-04-29 13:39:37 +01:00
Christos Soulios 02bf0c586a
[7.x] Histogram field type support for Sum aggregation (#55916)
Implements Sum aggregation over Histogram fields by summing the value of each bucket multiplied by their count as requested in #53285

Backports #55681 to 7.x
2020-04-29 15:06:12 +03:00
David Roberts 6ad497bfda Muting AsyncSearchActionIT.testTermsAggregation
Due to https://github.com/elastic/elasticsearch/issues/55920
2020-04-29 12:34:47 +01:00
Dimitris Athanasiou d9685a0f19
[7.x][ML] Validate at least one feature is available for DF analytics (#55876) (#55914)
We were previously checking at least one supported field existed
when the _explain API was called. However, in the case of analyses
with required fields (e.g. regression) we were not accounting for the fact that
the dependent variable is not a feature and thus if the source index
only contains the dependent variable field there are no features to
train a model on.

This commit adds a validation that at least one feature is available
for analysis. Note that we also move that validation away from
`ExtractedFieldsDetector` and the _explain API and straight into
the _start API. The reason for doing this is to allow the user to use
the _explain API in order to understand why they would be seeing an
error like this one.

For example, the user might be using an index that has fields but
they are of unsupported types. If they start the job and get
an error that there are no features, they will wonder why that is.
Calling the _explain API will show them that all their fields are
unsupported. If the _explain API was failing instead, there would
be no way for the user to understand why all those fields are
ignored.

Closes #55593

Backport of #55876
2020-04-29 11:39:58 +03:00
David Roberts 61ac09ae21
[ML] Add daily_model_snapshot_retention_after_days to job config (#55891)
This change adds a new setting, daily_model_snapshot_retention_after_days,
to the anomaly detection job config.

Initially this has no effect, the effect will be added in a followup PR.
This PR gets the complexities of making changes that interact with BWC
over well before feature freeze.

Backport of #55878
2020-04-29 09:12:53 +01:00
Nik Everett a5d0409a8f
Save memory on aggs in async search (#55683) (#55879)
This replaces a reference to the result of partially reducing
aggregations that async search keeps with a reference to the serialized
form of the result of the partial reduction which we need to keep
anyway.
2020-04-28 16:23:30 -04:00
Larry Gregory 47d252424b
Backport: Deprecate the kibana reserved user (#54967) (#55822) 2020-04-28 10:30:25 -04:00
Christos Soulios fae9ec13dd
Removed ValuesSourceRegistry.registerAny() (#55846)
* Backports #55747 to 7.x
* All ValuesSourceTypes must be registered
explicitly
* Removed lambdas in ValuesSourceRegistry
2020-04-28 15:44:42 +03:00
Adrien Grand 58c3bb5ae1
Repurpose `ignore_throttled` to be only about frozen indices. (#55047) (#55852)
This has no practical impact on users since frozen indices are the only
throttled indices today. However this has an impact on upcoming features
that would use search throttling.

Filtering out throttled indices made sense a couple years ago, but as
we're now improving support for slow requests with `_async_search` and
exploring ways to reduce storage costs, this feature has most likely
become a trap, that we'd like to not have with upcoming features that
would use search throttling.

Relates #54058
2020-04-28 14:31:54 +02:00
David Turner 3f2d10d8fc Permit searches to be concurrent to prewarming (#55795)
Today when prewarming a searchable snapshot we use the `SparseFileTracker` to
lock each (part of a) snapshotted blob, blocking any other readers from
accessing this data until the whole part is available.

This commit changes this strategy: instead we optimistically start to download
the blob without any locking, and then lock much smaller ranges after each
individual `read()` call. This may mean that some bytes are downloaded twice,
but reduces the time that other readers may need to wait before the data they
need is available.

As a best-effort optimisation we try to request the smallest possible single
range of missing bytes in the part by first checking how many of the initial
and terminal bytes of the part are already present in cache. In particular if
the part is already fully cached before prewarming then this check means we
skip the part entirely.
2020-04-28 10:44:05 +01:00
Tim Brooks 80662f31a1
Introduce mechanism to stub request handling (#55832)
Currently there is a clear mechanism to stub sending a request through
the transport. However, this is limited to testing exceptions on the
sender side. This commit reworks our transport related testing
infrastructure to allow stubbing request handling on the receiving side.
2020-04-27 16:57:15 -06:00
Tal Levy 6ba5148ead
Add geo_shape support for the geo_centroid aggregation (#55602) (#55819)
This commit leverages the new geo_shape doc values
to register a new geo_centroid aggregator that works
on the geo_shape field.
2020-04-27 12:16:10 -07:00
Ioannis Kakavas ca5d677130
Mute-55816 (#55818)
See #55816
2020-04-27 21:26:02 +03:00
Hendrik Muhs 4b93f17b24 [Transform] improve TransformRestTestCase robustness (#55786)
handles/retries temporary SearchPhaseExecutionErrors

fixes #54810
2020-04-27 17:17:53 +02:00
Jake Landis 6f392cf5b9
[7.x] json spec - add description for searchable snapshots (#55746) (#55809) 2020-04-27 10:08:09 -05:00
Mark Tozzi 22a98ec279
Aggregation support for Value Scripts that change types (#54830) (#55752) 2020-04-27 09:57:05 -04:00
Dimitris Athanasiou abab4c4d4f
[7.x][ML] Do not fail DFA task when it's stopped whilst reindexing (#55797) (#55800)
Adding to #55659, we missed another way we could set the task to
failed due to task cancellation. CI revealed that we might also
get a `SearchPhaseExecutionException` whose cause is a
`TaskCancelledException`. That exception is not wrapped so
unwrapping it will not return the underlying `TaskCancelledException`.
Thus to be complete in catching this, we also need to check the
error's cause.

Closes #55068

Backport of #55797
2020-04-27 16:03:57 +03:00
Dimitris Athanasiou 7f100c1196
[7.x][ML] Allow analytics process define its own progress phases (#55763) (#55791)
This is a continuation from #55580.

Now that we're parsing phase progresses from the analytics process
we change `ProgressTracker` to allow for custom phases between
the `loading_data` and `writing_results` phases. Each `DataFrameAnalysis`
may declare its own phases.

This commit sets things in place for the analytics process to start
reporting different phases per analysis type. However, this is
still preserving existing behaviour as all analyses currently
declare a single `analyzing` phase.

Backport of #55763
2020-04-27 13:30:05 +03:00
Ioannis Kakavas d56f25acb4
Validate hashing algorithm in users tool (#55628) (#55734)
This change adds validation when running the users tool so that
if Elasticsearch is expected to run in a JVM that is configured to
be in FIPS 140 mode and the password hashing algorithm is not
compliant, we would throw an error.
Users tool uses the configuration from the node and this validation
would also happen upon node startup but users might be added in the
file realm before the node is started and we would have the
opportunity to notify the user of this misconfiguration.
The changes in #55544 make this much less probable to happen in 8
since the default algorithm will be compliant but this change can
act as a fallback in anycase and makes for a better user experience.
2020-04-27 12:23:41 +03:00
Ioannis Kakavas 38b55f06ba
Fix concurrent refresh of tokens (#55114) (#55733)
Our handling for concurrent refresh of access tokens suffered from
a race condition where:

1. Thread A has just finished with updating the existing token
document, but hasn't stored the new tokens in a new document
yet
2. Thread B attempts to refresh the same token and since the
original token document is marked as refreshed, it decrypts and
gets the new access token and refresh token and returns that to
the caller of the API.
3. The caller attempts to use the newly refreshed access token
immediately and gets an authentication error since thread A still
hasn't finished writing the document.

This commit changes the behavior so that Thread B, would first try
to do a Get request for the token document where it expects that
the access token it decrypted is stored (with exponential backoff)
and will not respond until it can verify that it reads it in the
tokens index. That ensures that we only ever return tokens in a
response if they are already valid and can be used immediately

It also adjusts TokenAuthIntegTests
to test authenticating with the tokens each thread receives,
which would fail without the fix.

Resolves: #54289
2020-04-27 12:23:17 +03:00
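
The shape of the fix above, reduced to a generic sketch (this is not the actual Elasticsearch code; the names and timings are illustrative): retry a read with exponential backoff until the expected document becomes visible, and only then respond:
```
import java.util.Optional;
import java.util.function.Supplier;

public class GetWithBackoffSketch {
    // Poll a read until it returns a value, doubling the wait between attempts.
    static <T> Optional<T> getWithBackoff(Supplier<Optional<T>> read, int maxAttempts) throws InterruptedException {
        long waitMillis = 50;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            Optional<T> result = read.get();
            if (result.isPresent()) {
                return result; // the document is visible; safe to return the tokens
            }
            Thread.sleep(waitMillis);
            waitMillis *= 2; // exponential backoff
        }
        return Optional.empty(); // caller decides how to fail after exhausting attempts
    }
}
```
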
David Roberts 3ba44a5af8
[ML] Adding failed_category_count to model_size_stats (#55761)
The failed_category_count statistic records the number of times
categorization wanted to create a new category but couldn't
because the job had reached its model_memory_limit.

Backport of #55716
2020-04-25 10:36:49 +01:00
Aleksandr Maus ad54cca823
EQL: implement math functions: add, divide, module, multiply, subtract (#55137) (#55737)
* EQL: implement math functions: add, divide, module, multiply, subtract
2020-04-24 15:52:27 -04:00
James Rodewig c1b0548db0
[DOCS] Document EQL search REST API (#52384) 2020-04-24 15:36:01 -04:00
Nick Knize b0e8a8a4d1
[Backport] Refactor Spatial Field Mappers (#55696)
This commit refactors all spatial Field Mappers to a common
AbstractGeometryFieldMapper that implements shared parameter functionality
(e.g., ignore_malformed, ignore_z_value) and provides a common framework for
overriding type parsing, and building in xpack. Common shape functionality is
implemented in a new AbstractShapeGeometryFieldMapper that is reused and
overridden in GeoShapeFieldMapper, GeoShapeFieldMapperWithDocValues,
LegacyGeoShapeFieldMapper, and ShapeFieldMapper. This abstraction provides a
reusable foundation for adding new xpack features; such as coordinate reference
system support.
2020-04-24 14:05:16 -05:00
Mark Tozzi 87b4979c24
[7.x] Make ValuesSourceRegistry immutable after initilization #55493 (#55697) 2020-04-24 13:33:38 -04:00
Jason Tedor 22a8b60187
Reduce code duplication in CCR non-compliance tests
This commit removes some code duplication in the CCR non-compliance
tests by refactoring an assertion method so that it can be used in both
tests that are present there.
2020-04-24 13:24:56 -04:00
Tanguy Leroux 41ddbd4188 Allow to prewarm the cache for searchable snapshot shards (#55322)
Relates #50999
2020-04-24 18:03:34 +02:00
Dimitris Athanasiou 210b7f1b76
[7.x][ML] Remove parsing of old progress format in DF Analytics (#55711) (#55720)
Since #55580 we've introduced a new format for parsing progress
from the data frame analytics process. As the process is now
writing out progress in this new way, we can remove the parsing
of the old format.

Backport of #55711
2020-04-24 16:50:56 +03:00
David Turner aa9a2bce37 Avoid accidental contiguous read (#55713)
If we choose to read from two random positions that are 1024 bytes apart then
this counts as a contiguous read for stats purposes, failing this test. This
commit ensures that we always perform a non-contiguous read.
2020-04-24 11:50:31 +01:00
David Turner de30550aea Relax elapsed time stats assertion (#55710)
`SearchableSnapshotDirectoryStatsTests#testCachedBytesReadsAndWrites` asserts
that each write takes one clock tick, but we now permit concurrent reads and
writes so each write might take longer. This commit relaxes the assertion to
match.

Closes #55707
2020-04-24 10:21:08 +01:00
Przemysław Witek c89917c799
Register DFA jobs on putAnalytics rather than via a separate method (#55458) (#55708) 2020-04-24 10:59:32 +02:00
Dimitris Athanasiou b8379872a7
[7.x][ML] Logs error when DFA task is set to failed (#55545) (#55668)
Also unmutes the integ test that stops and restarts
an outlier detection job with the hope of learning more
of the failure in #55068.

Backport of #55545

Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
2020-04-24 11:06:07 +03:00
Jim Ferenczi 0a6c74b7d3 AsyncSearchMaintenanceService should stop when closing a node (#55651)
This change turns the AsyncSearchMaintenanceService into an
AbstractLifecycleComponent and ensures that the service is
stopped when a node is closing.

Closes #55646
2020-04-24 09:38:40 +02:00
Hendrik Muhs b213209f0c [Rollup] improve stopping tests (#55666)
Improve tests related to stopping by using a client that answers and can be
synchronized with the test thread, in order to test special situations.

relates #55011
2020-04-24 08:48:36 +02:00
Jay Modi 30f8c326fe
Test: fix SSLReloadDuringStartupIntegTests (#55637)
This commit fixes reproducible test failures with the
SSLReloadDuringStartupIntegTests on the 7.x branch. The failures only
occur on 7.x due to the existence of the transport client and its usage
in our test infrastructure. This change removes the randomized usage of
transport clients when retrieving a client from a node in the internal
cluster. Transport clients do not support the reloading of files for
TLS configuration changes but if we build one from the nodes settings
and attempt to use it after the files have been changed, the client
will not know about the changes and the TLS connection will fail.

Closes #55524
2020-04-23 21:36:43 -06:00
Ryan Ernst 97c4b64fb1
Add isAllowed license utility (#55424) (#55700)
License state is currently made up of boolean methods that check whether
a particular feature is allowed by the current license state. Each new
feature must copy/paste boilerplate code. While that has gotten easier
with utilities like isAllowedByLicense, this is still more cumbersome
than should be necessary. This commit adds a general purpose isAllowed
method which takes a new Feature enum, where each value of the enum
defines the minimum license mode and whether the license must be active
to be allowed. Only security features are converted in this PR, in order
to keep the commit size relatively small. The rest of the features will
be converted in a followup.
2020-04-23 16:28:28 -07:00
Zachary Tong 715c90bf7d Aggs must specify a `field` or `script` (or both) (#52226)
This adds a validation to VSParserHelper to ensure that a field or
script or both are specified by the user.  This is technically
required today already, but throws an exception much deeper
in the agg framework and has a very unintuitive error for the user
(as well as eating more resources instead of failing early)
2020-04-23 19:23:41 -04:00
jimczi c857adf603 Fix AsyncSearchTaskTests#testWithFetchFailures
Fix usage of a possible invalid random range [1, 0].

Relates #55688
2020-04-24 00:45:17 +02:00
Jim Ferenczi 31d1727698 Fix (de)serialization of async search failures (#55688)
The (de)serialization code of the async search response
cannot handle exceptions that extend ElasticsearchException (e.g. ScriptException).
This commit fixes this bug by serializing the error with the more generic
StreamOutput#writeException.
2020-04-24 00:44:43 +02:00
Igor Motov 8c7ef2417f
Make AsyncSearchIndexService reusable (#55598)
EQL will require very similar functionality to async search. This PR refactors
AsyncSearchIndexService to make it reusable for EQL.

Supersedes #55119
Relates to #49638
2020-04-23 18:02:17 -04:00
Nick Knize 96a02089c2
Refactor GeoShape DocValues in spatial xpack (#55691)
This commit refactors geo_shape doc values, fielddata, and utility classes from
the single mapper package in x-pack spatial plugin to a package structure that
is consistent with the server module.
2020-04-23 15:32:23 -05:00
David Roberts 46be9959a0
[ML] Audit when unassigned datafeeds are stopped (#55667)
Previously audit messages were indexed when datafeeds that were
assigned to a node were stopped, but not datafeeds that were
unassigned at the time they were stopped.

This change adds auditing for the unassigned case.

Backport of #55656
2020-04-23 20:46:35 +01:00
Dan Hermann dd5c96c2ed
[7.x] Rollover for data streams 2020-04-23 12:04:34 -05:00
Zachary Tong 4f483ac370 Fix half-float range in SupportedTypeTests (#55409)
Also adds a comment to the half-float number field type tests indicating
why 70000 is used instead of 65504
2020-04-23 11:36:37 -04:00
Dimitris Athanasiou 4b11adf074
[7.x][ML] Do not fail DFA task that is stopped during reindexing (#55659) (#55663)
While we were catching `TaskCancelledException` while we wait for
reindexing to complete, we missed the fact that this exception
may be wrapped in a multi-node cluster. This is the reason
we may still fail the task when stop is called while reindexing.

Sometimes we're lucky and the exception is thrown by the same
node that runs the job. Then the exception is not wrapped and
things work fine. But when that is not the case the exception is
wrapped, we fail to catch it, and set the task to failed.

The fix is to simply unwrap the exception when we check whether it
is `TaskCancelledException`.

Closes #55068

Backport of #55659
2020-04-23 15:57:01 +03:00