Commit Graph

3891 Commits

Martijn van Groningen 8a4eefdd83
Expose enrich stats api to monitoring. (#46708)
This change also slightly modifies the stats response,
so that it can be consumed more easily by monitoring and other
users. (Coordinator stats are now in a list instead of
a map and have an additional field for the node id.)

Relates to #32789
2019-09-26 11:04:33 +02:00
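
As a quick orientation for the monitoring integration, a minimal console sketch of fetching enrich stats; the endpoint path and the response fields shown are assumptions based on the description above and are illustrative only:

    GET /_enrich/_stats

    # Illustrative response shape after this change: coordinator stats are a
    # list (not a map keyed by node), and each entry carries a "node_id" field.
    # {
    #   "executing_policies": [ ... ],
    #   "coordinator_stats": [
    #     { "node_id": "...", "queue_size": 0, ... }
    #   ]
    # }
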
James Baiera 9967aff714 Add notice to Enrich index mapping metadata (#45996) 2019-09-24 12:55:11 -04:00
James Baiera a349b22273 Add the cluster version to enrich policies (#45021)
Adds the Elasticsearch version as a field on the EnrichPolicy object
2019-09-23 18:44:45 -04:00
Martijn van Groningen 33bbc4798b
fixed compile errors after merging 2019-09-23 09:46:14 +02:00
Martijn van Groningen 0cfddca61d
Merge remote-tracking branch 'es/7.x' into enrich-7.x 2019-09-23 09:46:05 +02:00
Martijn van Groningen bf42789eb6
fixed compile error 2019-09-22 21:20:12 +02:00
Lisa Cawley 875d864be6
[DOCS] Update data frame transform URLs (#46940) (#46946) 2019-09-20 15:57:43 -07:00
Michael Basnight f1c7ed647b Allow comma separated ids in get enrich policy API (#46351)
This commit changes the GET REST API so it will accept an optional comma-separated
list of enrich policy ids. This change also modifies the
behavior of the GET API in that it no longer errors when it is passed a bad
enrich id, but instead just returns an empty list.
2019-09-20 10:06:58 -05:00
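
A hedged console sketch of the updated GET API, using hypothetical policy names; per the description above, names that do not match any policy are simply skipped rather than producing an error:

    GET /_enrich/policy/users-policy,hosts-policy

    # If neither name matches an existing policy, the API now returns an
    # empty list instead of an error.
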
Hendrik Muhs 4a2cb05162 add message about transform disabled if license is missing (#46901)
adds a message for transforms explaining what happens if no license has been activated
2019-09-20 13:47:40 +02:00
Hendrik Muhs abe889af75
[7.5][Transform] rename classes in transform plugin (#46867)
rename classes and settings in transform plugin, provide BWC for old settings
2019-09-20 10:43:00 +02:00
Jason Tedor bd77626177
Add the ability to require an ingest pipeline (#46847)
This commit adds the ability to require an ingest pipeline on an
index. Today we can have a default pipeline, but that could be
overridden by a request pipeline parameter. This commit introduces a new
index setting index.required_pipeline that acts similarly to
index.default_pipeline, except that it can not be overridden by a
request pipeline parameter. Additionally, a default pipeline and a
request pipeline can not both be set. The required pipeline can be set
to _none to ensure that no pipeline ever runs for index requests on that
index.
2019-09-19 16:37:45 -04:00
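
A minimal sketch of the new setting as described above, with a hypothetical index and pipeline name; the syntax is assumed to follow the usual index-creation console format:

    PUT /logs-required
    {
      "settings": {
        "index.required_pipeline": "my_pipeline"
      }
    }

    # Index requests to this index always run "my_pipeline"; unlike
    # index.default_pipeline, it cannot be overridden by a ?pipeline=
    # request parameter. Setting the value to "_none" instead ensures
    # that no pipeline ever runs for this index.
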
Yannick Welsch 9638ca20b0 Allow dropping documents with auto-generated ID (#46773)
When using auto-generated IDs + the ingest drop processor (which looks to be used by filebeat
as well) + coordinating nodes that do not have the ingest processor functionality, this can lead
to a NullPointerException.

The issue is that markCurrentItemAsDropped() is creating an UpdateResponse with no id when
the request contains auto-generated IDs. The response serialization is lenient for our
REST/XContent format (i.e. we will send "id" : null) but the internal transport format (used for
communication between nodes) assumes this field to be non-null, which means that it can't
be serialized between nodes. Bulk requests with ingest functionality are processed on the
coordinating node if the node has the ingest capability, and only otherwise sent to a different
node. This means that, in order to reproduce this, one needs two nodes, with the coordinating
node not having the ingest functionality.

Closes #46678
2019-09-19 16:46:33 +02:00
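
To make the failure scenario concrete, a console sketch (hypothetical pipeline and index names) of the combination described above: a drop processor plus bulk requests that omit document IDs, so the IDs are auto-generated:

    PUT /_ingest/pipeline/drop-debug
    {
      "processors": [
        { "drop": { "if": "ctx.level == 'debug'" } }
      ]
    }

    POST /logs/_bulk?pipeline=drop-debug
    { "index": {} }
    { "level": "debug", "message": "dropped by the pipeline; no _id was supplied" }

    # Before this fix, routing such a request through a coordinating node
    # without ingest functionality could hit the NullPointerException.
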
Armin Braun 6b09c2cdbb
Limit Netty Workers in NativeRealmIntegTestCase (#46816) (#46850)
The fact that this test randomly uses a relatively large number
of nodes and hence Netty worker threads created a problem with
running out of direct memory on CI.
Tests run with 512M heap (and hence 512M direct memory) by default.
On a CI worker with 16 cores, this means Netty will by default set
up 32 transport workers. If we get unlucky and a lot of them
actually do work (and thus instantiate a `CopyBytesSocketChannel`
which costs 1M per thread for the thread-local IO buffer) we
would run out of memory.

This specific failure was only seen with `NativeRealmIntegTests` so I
only added the constraint on the Netty worker count here.
We can add it to other tests (or `SecurityIntegTestCase`) if need be
but for now it doesn't seem necessary so I opted for least impact.

Closes #46803
2019-09-19 13:07:42 +02:00
Dimitris Athanasiou 02a5e153dc
[7.x][ML] Parse and index data frame analytics state (#46804) (#46820)
This commit reuses the same state processor that is used for autodetect
to parse state output from data frame analytics jobs. We then index the
state document into the state index.

Backport of #46804
2019-09-18 20:37:40 +03:00
Benjamin Trent 9cf9c64ec2
[7.x] [ML][Transforms] remove `force` flag from _start (#46414) (#46748)
* [ML][Transforms] remove `force` flag from _start (#46414)

* [ML][Transforms] remove `force` flag from _start

* fixing expected error message

* adjusting bwc version
2019-09-18 10:06:05 -04:00
Dimitris Athanasiou cebe8da617
[7.x][ML] MlMemoryTracker should ignore analytics tasks without config (#46789) (#46811)
It is possible for a running analytics job to have its config removed
from the '.ml-config' index (perhaps the user deleted the entire index,
etc.). In that case the task remains without a matching config. I have
raised #46781 to discuss how to deal with this issue.

This commit focuses on `MlMemoryTracker` and changes it so that when
we get the configs for the running tasks we leniently ignore missing ones.
This at least means memory tracking will keep working for other jobs
if one or more are missing.

In addition, this commit makes the cleanup code for native analytics
tests more robust by explicitly stopping all jobs and force-stopping
if an error occurs. This helps ensure that a single failing test does
not cause other tests to fail due to pending tasks.

Backport of #46789
2019-09-18 16:35:25 +03:00
Alpar Torok f3e67bdd17 Add resolution rule to allow resolving all deps (#46768)
Since the `resolveAllDependencies` task resolves all the configurations
it can find, this was not caught by our testing, but it needs to be
configured specifically.

We should probably cut-over to the new configurations at some point to
avoid problems like this.

Closes elastic/infra#14580
2019-09-18 11:09:43 +03:00
Lee Hinman b85468d6ea
Add node setting for disabling SLM (#46794) (#46796)
This adds the `xpack.slm.enabled` setting to allow disabling of SLM
functionality as well as its HTTP API endpoints.

Relates to #38461
2019-09-17 17:39:41 -06:00
Oliver Gupte cbd58d3b78
Give kibana user privileges to create APM agent config index (#46765) (#46792)
* Give kibana user reserved role privileges on .apm-* to create APM agent configuration index.

* fixed test to include checking all .apm-* permissions

* changed pattern from ".apm-*" to the more specific ".apm-agent-configuration"
2019-09-17 15:01:42 -07:00
Costin Leau 92e518e789 SQL: Properly handle indices with no/empty mapping (#46775)
When encountering only indices with empty mapping, the IndexResolver
throws an exception as it expects to find at least one entry.
This commit fixes this case so that an empty mapping is returned.

Fix #46757

(cherry picked from commit 5f4f5807acb93b5fab36718c092c328977a396b6)
2019-09-17 16:01:22 +03:00
Armin Braun b0f09b279f
Make Snapshot Logic Write Metadata after Segments (#45689) (#46764)
* Write metadata during snapshot finalization after segment files to prevent outdated metadata in case of dynamic mapping updates as explained in #41581
* Keep the old behavior of writing the metadata beforehand in the case of mixed version clusters for BwC reasons
   * Still overwrite the metadata in the end, so even a mixed version cluster is fixed by this change if a newer version master does the finalization
* Fixes #41581
2019-09-17 13:09:39 +02:00
Tomas Della Vedova e1cf103980
Fixes for API specification (#46522) (#46736)
Follow-up of #42346
2019-09-17 11:49:24 +02:00
Costin Leau 683b5fdeca SQL: Support queries with HAVING over SELECT (#46709)
Handle queries with implicit GROUP BY where the aggregation is not in
the projection/SELECT but inside the filter/HAVING such as:

SELECT 1 FROM x HAVING COUNT(*) > 0

The engine now properly identifies the case and handles it accordingly.

Fix #37051

(cherry picked from commit fa53ca05d8219c27079b50b4a5b7aeb220c7cde2)
2019-09-17 11:14:39 +03:00
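
A hedged sketch of running the example query through the SQL endpoint (the index name is hypothetical):

    POST /_sql?format=txt
    {
      "query": "SELECT 1 FROM logs HAVING COUNT(*) > 0"
    }

    # With this change the implicit GROUP BY is recognized even though the
    # aggregation only appears in the HAVING clause.
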
Costin Leau 90f4c2379b SQL: improve ResultSet behavior when no rows are available (#46753)
Improve the defensive behavior of ResultSet when dealing with incorrect
API usage. In particular, handle the case where no row is
available (either because the cursor is before the first entry or after
the last).

Fix #46750

(cherry picked from commit 58fa38e4606625962e879265d35eacb0960c6cdb)
2019-09-17 11:14:38 +03:00
Przemysław Witek e49be611ad
[7.x] Add audit messages for Data Frame Analytics (#46521) (#46738) 2019-09-16 21:21:38 +02:00
Benjamin Trent 92acc732de
[ML][Transform] Use field caps for mapping deduction (#46703) (#46742) 2019-09-16 10:05:55 -04:00
Andrei Stefan 40e9353947 SQL: use the correct data type for types conversion (#46574)
(cherry picked from commit 3e25db2f302c3aafe27e4d8d4fb1743401d85e6d)
2019-09-16 15:36:17 +03:00
Hendrik Muhs c8f52ec4ff
[Transform] Rename data frame plugin to transform: classes in xpack.core (#46644) (#46734)
rename classes in xpack.core of transform plugin from "data frame transform" to "transform"
2019-09-16 13:39:22 +02:00
Andrei Dan c57cca98b2
[ILM] Add date setting to calculate index age (#46561) (#46697)
* [ILM] Add date setting to calculate index age

Add the `index.lifecycle.origination_date` setting to allow users to configure a
custom date that'll be used to calculate the index age for the phase
transitions (as opposed to the default index creation date).

This could be useful for users to create an index with an "older"
origination date when indexing old data.

Relates to #42449.

* [ILM] Don't override creation date on policy init

The initial approach we took was to override the lifecycle creation date
if the `index.lifecycle.origination_date` setting was set. This had the
disadvantage of the user not being able to update the `origination_date`
anymore once set.

This commit changes the way we make use of the
`index.lifecycle.origination_date` setting by checking its value when
we calculate the index age (i.e. at "read time") and, in case it's not
set, defaulting to the index creation date.

* Make origination date setting index scope dynamic

* Document origination date setting in ILM settings

(cherry picked from commit d5bd2bb77ee28c1978ab6679f941d7c02e389d32)
Signed-off-by: Andrei Dan <andrei.dan@elastic.co>
2019-09-16 08:50:28 +01:00
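
A minimal console sketch of the setting as described, with a hypothetical index name; the epoch-milliseconds value format is an assumption here. Since the setting is dynamic, it could presumably also be updated on an existing index via the settings API:

    PUT /old-logs-2018.01
    {
      "settings": {
        "index.lifecycle.origination_date": 1514764800000
      }
    }

    # ILM would compute the index age from this date (rather than from the
    # index creation date) when evaluating phase transitions.
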
Dimitris Athanasiou 63eb0d9081
[7.x][ML] Avoid marking data frame analytics task completed twice (#46721) (#46724)
When the stop API is called while the task is running there is
a chance the task gets marked completed twice. This may cause
undesired side effects, like indexing the progress document a second
time after the stop API has returned (the cause for #46705).

This commit adds a check that the task has not been completed before
proceeding to mark it so. In addition, when we update the task's state
we could get some warnings that the task was missing if the stop API
has been called in the meantime. We now check the errors are
`ResourceNotFoundException` and ignore them if so.

Closes #46705

Backports #46721
2019-09-15 17:25:26 +03:00
Hendrik Muhs e1842c0e5a
[7.x][Transforms] backport BWC tests for transforms crud (#46452)
backport 8.0 transform tests to 7.x
2019-09-14 13:06:48 +02:00
Lisa Cawley c0a16047fa [DOCS] Updates links to reporting content (#46717) 2019-09-13 11:40:07 -07:00
James Rodewig 2831535cf9 [DOCS] Replace "// CONSOLE" comments with [source,console] (#46679) 2019-09-13 11:44:54 -04:00
Nhat Nguyen cabff5a7cd Handle lower retaining seqno retention lease error (#46420)
We renew the CCR retention lease at a fixed interval, therefore it's
possible to have more than one in-flight renewal request at the same
time. If requests arrive out of order, then the assertion is violated.

Closes #46416
Closes #46013
2019-09-13 08:50:19 -04:00
Dimitris Athanasiou 0bc8acaf5b
[7.x][ML] Create state index and alias before starting an analytics job (#46602) (#46648)
This fixes a bug where, if an analytics job is started before any
anomaly detection job is opened, we create an index after the state
write alias.

Instead, we should create the state index and alias before starting
an analytics job and this commit makes sure this is the case.

Backport of #46602
2019-09-13 10:34:12 +03:00
Lisa Cawley dae5b22bf8 [DOCS] Fixes link to Kibana security (#46690) 2019-09-12 16:30:43 -07:00
Przemysław Witek 5b1f6669ff
Do not wait for the old notifications index (".ml-notifications"). It is no longer used. (#46657) (#46666) 2019-09-12 21:47:25 +02:00
Luca Cavanna e57756492a Update http-core and http-client dependencies (#46549)
Relates to #45808
Closes #45577
2019-09-12 09:45:29 +02:00
Lisa Cawley ec5592ed76 [DOCS] Adds missing icons to Watcher HLRC APIs (#46626) 2019-09-11 16:35:15 -07:00
Zachary Tong 6dc8ed5d57
[7.x Backport] Refactor AllocatedPersistentTask#init(), move rollup ctor logic (#46406)
This makes the AllocatedPersistentTask#init() method protected so that
implementing classes can perform their initialization logic there,
instead of the constructor.  Rollup's task is adjusted to use this
init method.

It also slightly refactors the methods to use a static logger in the
AllocatedTask instead of passing it in via an argument.  This is
simpler, logged messages come from the task instead of the
service, and it is easier for tests.
2019-09-11 17:00:28 -04:00
James Rodewig f9bf10f2b6
[DOCS] Change "a SSL" to "an SSL" in the Java docs (#46524) (#46618) 2019-09-11 15:55:57 -04:00
Marios Trivyzas d956509394 SQL: Implement DATE_TRUNC function (#46473)
DATE_TRUNC(<truncate field>, <date/datetime>) is a function that allows
the user to truncate a timestamp to the specified field by zeroing out
the rest of the fields. The function is implemented according to the
spec from PostgreSQL: https://www.postgresql.org/docs/current/functions-datetime.html#FUNCTIONS-DATETIME-TRUNC

Closes: #46319
(cherry picked from commit b37e96712db1aace09f17b574eb02ff6b942a297)
2019-09-11 21:41:02 +03:00
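
A hedged example of the new function through the SQL endpoint; the literal and its truncated result are illustrative:

    POST /_sql?format=txt
    {
      "query": "SELECT DATE_TRUNC('month', CAST('2019-09-26T10:30:00' AS DATETIME)) AS month_start"
    }

    # Expected to zero out everything below the month field,
    # e.g. 2019-09-01T00:00:00.000Z (illustrative).
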
Ryan Ernst 86290cb3d9 Make reuse of sql test code explicit (#45884)
The sql project uses a common set of security tests, which are run in
subprojects. Currently these are shared through a shared directory, but
this is not setup correctly to ensure it is built before tests run. This
commit changes the test classes to be an artifact of the sql/qa/security
project and makes the test runner use the built artifact (a directory of
classes) for tests.

closes #45866
2019-09-11 10:56:07 -07:00
Lee Hinman 09a9cefaa0 Handle partial failure retrieving segments in SegmentCountStep (#46556)
Since the `IndicesSegmentsRequest` scatters to all shards for the index,
it's possible that some of the shards may fail. This adds failure
handling and logging (since this is a best-effort step in the first
place) for this case.
2019-09-11 10:29:31 -06:00
Marios Trivyzas 0963e78164 SQL: Fix issue with common type resolution (#46565)
Many scalar functions try to find out the common type between their
arguments in order to set it as their return type, e.g.:
for `float + double` the common type which is set as the return type
of the + operation is `double`.

Previously, for data types TEXT and KEYWORD (string data types) there
was no common data type found and null was returned causing NPEs when
the function was trying to resolve the return data type.

Fixes: #46551
(cherry picked from commit 291017d69dfc810707c3c7c692f5a50af431b790)
2019-09-11 19:10:15 +03:00
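
For illustration, a hedged sketch of the kind of expression that exercises common type resolution between string types (the field names are hypothetical, one mapped as text and one as keyword); before this fix such a query could trigger the NPE described above:

    POST /_sql?format=txt
    {
      "query": "SELECT COALESCE(message_text, host_keyword) FROM logs LIMIT 5"
    }
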
Lee Hinman 52d7b03b49 Wait for no snapshots in state in testRetentionWhileSnapshotIn… (#46573)
This commit adds a wait/check for all running snapshots to be cleared
before taking another snapshot. The previous snapshot was successful but
had not yet been cleared from the cluster state, so the second snapshot
failed due to a `ConcurrentSnapshotException`.

Resolves #46508
2019-09-11 09:47:01 -06:00
David Roberts 461de5b58e [TEST] Remove incorrect data frame analytics state assertion (#46597)
After starting the analytics job, the state reported when checking it
can be any of "started", "reindexing" or
"analyzing", depending on how quickly the work is done.
2019-09-11 16:33:14 +01:00
David Roberts 07a0140260
[ML-DataFrame] Ensure latest index template exists before indexing docs (#46595)
When upgrading data nodes to a newer version before
master nodes there was a risk that a transform running
on an upgraded data node would index a document into
the new transforms internal index before its index
template was created.  This would cause the index to
be created with entirely dynamic mappings.

This change introduces a check before indexing any
internal transforms document to ensure that the required
index template exists and create it if it doesn't.

Backport of #46553
2019-09-11 16:27:26 +01:00
Jim Ferenczi 23bf310c84 Replace the SearchContext with QueryShardContext when building aggregator factories (#46527)
This commit replaces the `SearchContext` with the `QueryShardContext` when building aggregator factories. Aggregator factories are part of the `SearchContext` so they shouldn't require a `SearchContext` to create them.
The main changes here are the signatures of `AggregationBuilder#build` that now takes a `QueryShardContext` and `AggregatorFactory#createInternal` that passes the `SearchContext` to build the `Aggregator`.

Relates #46523
2019-09-11 16:43:30 +02:00
Hendrik Muhs efea581dcc
[7.x][Transform]Rename data frame plugin to transform: plugin and package names (#46583)
rename data frame transform plugin to transform:

 - rename plugin data-frame to transform
 - change all package names from o.e.*.dataframe.* to o.e.*.transform.*
 - necessary changes to fix loading/testing
2019-09-11 14:50:08 +02:00