Commit Graph

1528 Commits

Author SHA1 Message Date
Jim Ferenczi 23bf310c84 Replace the SearchContext with QueryShardContext when building aggregator factories (#46527)
This commit replaces the `SearchContext` with the `QueryShardContext` when building aggregator factories. Aggregator factories are part of the `SearchContext` so they shouldn't require a `SearchContext` to create them.
The main changes here are the signatures of `AggregationBuilder#build`, which now takes a `QueryShardContext`, and `AggregatorFactory#createInternal`, which passes the `SearchContext` to build the `Aggregator`.

Relates #46523
2019-09-11 16:43:30 +02:00
Hendrik Muhs efea581dcc
[7.x][Transform] Rename data frame plugin to transform: plugin and package names (#46583)
rename data frame transform plugin to transform:

 - rename plugin data-frame to transform
 - change all package names from o.e.*.dataframe.* to o.e.*.transform.*
 - necessary changes to fix loading/testing
2019-09-11 14:50:08 +02:00
Armin Braun 41633cb9b5
More Efficient Ordering of Shard Upload Execution (#42791) (#46588)
* More Efficient Ordering of Shard Upload Execution (#42791)
* Change the upload order of snapshots to work file by file in parallel on the snapshot pool instead of merely shard-by-shard
* Inspired by #39657
* Cleanup BlobStoreRepository Abort and Failure Handling (#46208)
2019-09-11 13:59:20 +02:00
Martijn van Groningen a4b0f66919
Add enrich stats api (#46462)
The enrich api returns enrich coordinator stats and
information about currently executing enrich policies.

The coordinator stats include per ingest node:
* The current number of search requests in the queue.
* The total number of remote requests that
  have been executed since node startup. Each remote
  request is likely to include multiple search requests.
  This depends on how many search requests are in the
  queue at the time when the remote request is performed.
* The number of current outstanding remote requests.
* The total number of search requests that `enrich`
  processors have executed since node startup.

The stats for currently executing policies include:
* The name of the policy that is executing.
* A full-blown task info object for the task that is executing the policy.
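
For illustration, here is a minimal sketch of how these stats might be retrieved. The `/_enrich/_stats` path and the response field names follow the enrich stats API as later documented and are shown only as an indication of the shape:

```
GET /_enrich/_stats
```

```json
{
  "executing_policies": [],
  "coordinator_stats": [
    {
      "node_id": "1sFM8cmSROZYhPxVsiWew",
      "queue_size": 0,
      "remote_requests_current": 0,
      "remote_requests_total": 0,
      "executed_searches_total": 0
    }
  ]
}
```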

Relates to #32789
2019-09-11 13:40:24 +02:00
Jim Ferenczi 425b1a77e8 Add more context to QueryShardContext (#46584)
This change adds an IndexSearcher and the node's BigArrays to the QueryShardContext.
It's a spin-off of #46527, as this change is required to allow aggregation builders to solely use the
query shard context.

Relates #46523
2019-09-11 12:24:51 +02:00
Przemysław Witek e38e631dac
[7.x] Implement DataFrameAnalyticsAuditMessage and DataFrameAnalyticsAuditor (#45967) (#46519) 2019-09-11 12:17:26 +02:00
Lee Hinman cdc3a260af
Add retention to Snapshot Lifecycle Management (backport of #4… (#46506)
* Add retention to Snapshot Lifecycle Management (#46407)

This commit adds retention to the existing Snapshot Lifecycle Management feature (#38461) as described in #43663. This allows a user to configure SLM to automatically delete older snapshots based on a number of criteria.

An example policy would look like:

```
PUT /_slm/policy/snapshot-every-day
{
  "schedule": "0 30 2 * * ?",
  "name": "<production-snap-{now/d}>",
  "repository": "my-s3-repository",
  "config": {
    "indices": ["foo-*", "important"]
  },
  // Newly configured retention options
  "retention": {
    // Snapshots should be deleted after 14 days
    "expire_after": "14d",
    // Keep a maximum of thirty snapshots
    "max_count": 30,
    // Keep a minimum of the four most recent snapshots
    "min_count": 4
  }
}
```

SLM retention runs on a schedule configurable with the `slm.retention_schedule` setting, which supports cron expressions. Deletions are run for a configurable time bounded by the `slm.retention_duration` setting, which defaults to 1 hour.
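
For illustration, both are dynamic cluster settings, so they could be adjusted with something like the following (the values here are made up):

```json
PUT /_cluster/settings
{
  "persistent": {
    "slm.retention_schedule": "0 30 1 * * ?",
    "slm.retention_duration": "30m"
  }
}
```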

Included in this work is a new SLM stats API endpoint available through

``` json
GET /_slm/stats
```

That returns statistics about snapshots taken and deleted, as well as successful retention runs, failures, and the time spent deleting snapshots. #45362 has more information as well as an example of the output. These stats are also included when retrieving SLM policies via the API.

* Add base framework for snapshot retention (#43605)

* Add base framework for snapshot retention

This adds a basic `SnapshotRetentionService` and `SnapshotRetentionTask`
to start as the basis for SLM's retention implementation.

Relates to #38461

* Remove extraneous 'public'

* Use a local var instead of reading class var repeatedly

* Add SnapshotRetentionConfiguration for retention configuration (#43777)

* Add SnapshotRetentionConfiguration for retention configuration

This commit adds the `SnapshotRetentionConfiguration` class and its HLRC
counterpart to encapsulate the configuration for SLM retention.
Currently only a single parameter is supported as an example (we still
need to discuss the different options we want to support and their
names) to keep the size of the PR down. It also does not yet include version serialization checks
since the original SLM branch has not yet been merged.

Relates to #43663

* Fix REST tests

* Fix more documentation

* Use Objects.equals to avoid NPE

* Put `randomSnapshotLifecyclePolicy` in only one place

* Occasionally return retention with no configuration

* Implement SnapshotRetentionTask's snapshot filtering and delet… (#44764)

* Implement SnapshotRetentionTask's snapshot filtering and deletion

This commit implements the snapshot filtering and deletion for
`SnapshotRetentionTask`. Currently only the expire-after age is used for
determining whether a snapshot is eligible for deletion.

Relates to #43663

* Fix deletes running on the wrong thread

* Handle missing or null policy in snap metadata differently

* Convert Tuple<String, List<SnapshotInfo>> to Map<String, List<SnapshotInfo>>

* Use the `OriginSettingClient` to work with security, enhance logging

* Prevent NPE in test by mocking Client

* Allow empty/missing SLM retention configuration (#45018)

Semi-related to #44465, this allows the `"retention"` configuration map
to be missing.

Relates to #43663

* Add min_count and max_count as SLM retention predicates (#44926)

This adds the configuration options for `min_count` and `max_count` as
well as the logic for determining whether a snapshot meets these criteria
to SLM's retention feature.

These options are optional and one, two, or all three can be specified
in an SLM policy.

Relates to #43663

* Time-bound deletion of snapshots in retention delete function (#45065)

* Time-bound deletion of snapshots in retention delete function

With a cluster that has a large number of snapshots, it's possible that
snapshot deletion can take a very long time (especially since deletes
currently have to happen in a serial fashion). To prevent snapshot
deletion from taking forever in a cluster and blocking other operations,
this commit adds a setting to allow configuring a maximum time to spend
deleting snapshots during retention. This dynamic setting defaults to 1
hour and is best-effort, meaning that it doesn't hard stop a deletion
at an hour mark, but ensures that once the time has passed, all
subsequent deletions are deferred until the next retention cycle.

Relates to #43663

* Wow snapshots suuuure can take a long time.

* Use a LongSupplier instead of actually sleeping

* Remove TestLogging annotation

* Remove rate limiting

* Add SLM metrics gathering and endpoint (#45362)

* Add SLM metrics gathering and endpoint

This commit adds the infrastructure to gather metrics about the different SLM actions that a cluster
takes. These actions are stored in `SnapshotLifecycleStats` and persisted in cluster state. The
stats stored include the number of snapshots taken, failed, deleted, the number of retention runs,
as well as per-policy counts for snapshots taken, failed, and deleted. It also includes the amount
of time spent deleting snapshots from SLM retention.

This commit also adds an endpoint for retrieving all stats (further commits will expose this in the
SLM get-policy API) that looks like:

```
GET /_slm/stats
{
  "retention_runs" : 13,
  "retention_failed" : 0,
  "retention_timed_out" : 0,
  "retention_deletion_time" : "1.4s",
  "retention_deletion_time_millis" : 1404,
  "policy_metrics" : {
    "daily-snapshots2" : {
      "snapshots_taken" : 7,
      "snapshots_failed" : 0,
      "snapshots_deleted" : 6,
      "snapshot_deletion_failures" : 0
    },
    "daily-snapshots" : {
      "snapshots_taken" : 12,
      "snapshots_failed" : 0,
      "snapshots_deleted" : 12,
      "snapshot_deletion_failures" : 6
    }
  },
  "total_snapshots_taken" : 19,
  "total_snapshots_failed" : 0,
  "total_snapshots_deleted" : 18,
  "total_snapshot_deletion_failures" : 6
}
```

This does not yet include HLRC for this, as this commit is quite large on its own. That will be
added in a subsequent commit.

Relates to #43663

* Version qualify serialization

* Initialize counters outside constructor

* Use computeIfAbsent instead of being too verbose

* Move part of XContent generation into subclass

* Fix REST action for master merge

* Unused import

*  Record history of SLM retention actions (#45513)

This commit records the deletion of snapshots by the retention component
of SLM into the SLM history index for the purposes of reviewing operations
taken by SLM and alerting.

* Retry SLM retention after currently running snapshot completes (#45802)

* Retry SLM retention after currently running snapshot completes

This commit adds a ClusterStateObserver to wait until the currently
running snapshot is complete before proceeding with snapshot deletion.
SLM retention waits for the maximum allowed deletion time for the
snapshot to complete, however, the waiting time is not factored into
the limit on actual deletions.

Relates to #43663

* Increase timeout waiting for snapshot completion

* Apply patch

From 2374316f0d.patch

* Rename test variables

* [TEST] Be less strict for stats checking

* Skip SLM retention if ILM is STOPPING or STOPPED (#45869)

This adds a check to ensure we take no action during SLM retention if
ILM is currently stopped or in the process of stopping.

Relates to #43663

* Check all actions preventing snapshot delete during retention (#45992)

* Check all actions preventing snapshot delete during retention run

Previously we only checked to see if a snapshot was currently running,
but it turns out that more things can block snapshot deletion. This
changes the check to be a check for:

- a snapshot currently running
- a deletion already in progress
- a repo cleanup in progress
- a restore currently running

This was found by CI where a third party delete in a test caused SLM
retention deletion to throw an exception.

Relates to #43663

* Add unit test for okayToDeleteSnapshots

* Fix bug where SLM retention task would be scheduled on every node

* Enhance test logging

* Ignore if snapshot is already deleted

* Missing import

* Fix SnapshotRetentionServiceTests

* Expose SLM policy stats in get SLM policy API (#45989)

This also adds support for the SLM stats endpoint to the high level rest client.

Retrieving a policy now looks like:

```json
{
  "daily-snapshots" : {
    "version": 1,
    "modified_date": "2019-04-23T01:30:00.000Z",
    "modified_date_millis": 1556048137314,
    "policy" : {
      "schedule": "0 30 1 * * ?",
      "name": "<daily-snap-{now/d}>",
      "repository": "my_repository",
      "config": {
        "indices": ["data-*", "important"],
        "ignore_unavailable": false,
        "include_global_state": false
      },
      "retention": {}
    },
    "stats": {
      "snapshots_taken": 0,
      "snapshots_failed": 0,
      "snapshots_deleted": 0,
      "snapshot_deletion_failures": 0
    },
    "next_execution": "2019-04-24T01:30:00.000Z",
    "next_execution_millis": 1556048160000
  }
}
```

Relates to #43663

* Rewrite SnapshotLifecycleIT as an ESIntegTestCase (#46356)

* Rewrite SnapshotLifecycleIT as an ESIntegTestCase

This commit splits `SnapshotLifecycleIT` into two different tests.
`SnapshotLifecycleRestIT` which includes the tests that do not require
slow repositories, and `SLMSnapshotBlockingIntegTests` which is now an
integration test using `MockRepository` to simulate a snapshot being in
progress.

Relates to #43663
Resolves #46205

* Add error logging when exceptions are thrown

* Update serialization versions

* Fix type inference

* Use non-Cancellable HLRC return value

* Fix Client mocking in test

* Fix SLMSnapshotBlockingIntegTests for 7.x branch

* Update SnapshotRetentionTask for non-multi-repo snapshot retrieval

* Add serialization guards for SnapshotLifecyclePolicy
2019-09-10 09:08:09 -06:00
Michael Basnight 9304f5c889 Ensure enrich executes on master node only (#46448)
The previous transport action was a read action, which under the right
set of circumstances can execute on a coordinating node. This commit
ensures that cannot happen.
2019-09-10 09:59:36 -05:00
Martijn van Groningen c057fce978
Merge remote-tracking branch 'es/7.x' into enrich-7.x 2019-09-09 08:40:54 +02:00
Benjamin Trent 457ff3e2fb
7.x/ml fix instance serialization bwc (#46404)
* [ML] Fixing instance serialization version for bwc

* fixing CppLogMessage
2019-09-05 13:23:26 -05:00
Martijn van Groningen ded98e50b7
Change exact match processor to match processor. (#46041)
Besides a rename, this change allows the processor to attach multiple
enrich docs to the document being ingested.

Also in order to control the maximum number of enrich docs to be
included in the document being ingested, the `max_matches` setting
is added to the enrich processor.
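
For illustration, a pipeline using the processor might look like the sketch below. It assumes an enrich policy named `users-policy` has already been created and executed, and the processor option names follow the enrich processor as later documented:

```json
PUT /_ingest/pipeline/user_lookup
{
  "processors": [
    {
      "enrich": {
        "policy_name": "users-policy",
        "field": "email",
        "target_field": "user",
        "max_matches": 3
      }
    }
  ]
}
```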

Relates #32789
2019-09-04 18:05:12 +02:00
Andrey Ershov ece9eb4acd Remove stack trace logging in Security(Transport|Http)ExceptionHandler (#45966)
As per the comment on #45852, we no longer need to log stack-traces in
SecurityTransportExceptionHandler and SecurityHttpExceptionHandler even
if trace logging is enabled.

(cherry picked from commit c99224a32d26db985053b7b36e2049036e438f97)
2019-09-04 11:50:35 +03:00
Benjamin Trent 53df54c703
[ML][Transforms] fixing stop on changes check bug (#46162) (#46273)
* [ML][Transforms] fixing stop on changes check bug

* Adding new method finishAndCheckState to cover race conditions in early terminations

* changing stopping conditions in `onStart`

* allow indexer to finish when exiting early
2019-09-03 11:04:18 -05:00
Lee Hinman 3d4b8e01c7
Validate SLM policy ids strictly (#45998) (#46145)
This uses strict validation for SLM policy ids, similar to what we use
for index names.

Resolves #45997
2019-09-03 09:20:02 -06:00
David Roberts ab045744ac [ML-DataFrame] Fix off-by-one error in checkpoint operations_behind (#46235)
Fixes a problem where operations_behind would be one less than
expected per shard in a new index matched by the data frame
transform source pattern.

For example, if a data frame transform had a source of foo*
and a new index foo-new was created with 2 shards and 7 documents
indexed in it then operations_behind would be 5 prior to this
change.

The problem was that an empty index has a global checkpoint
number of -1 and the sequence number of the first document that
is indexed into an index is 0, not 1.  This doesn't matter for
indices included in both the last and next checkpoints, as the
off-by-one errors cancelled, but for a new index it affected
the observed result.
2019-09-03 12:45:02 +01:00
Martijn van Groningen 555b630160
Merge remote-tracking branch 'es/7.x' into enrich-7.x 2019-09-02 09:16:55 +02:00
Benjamin Trent d0c5573a51
[ML] Throw an error when a datafeed needs CCS but it is not enabled for the node (#46044) (#46096)
Though we allow CCS within datafeeds, users could prevent nodes from accessing remote clusters. This can cause mysterious errors that are difficult to troubleshoot.

This commit adds a check to verify that `cluster.remote.connect` is enabled on the current node when a datafeed is configured with a remote index pattern.
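
For illustration, the check applies to a datafeed like the following sketch, where the index pattern refers to a remote cluster (the job, datafeed and cluster names are made up):

```json
PUT _ml/datafeeds/datafeed-remote
{
  "job_id": "my-job",
  "indices": [ "remote_cluster:logs-*" ]
}
```

If `cluster.remote.connect` is set to `false` on the node running this datafeed, the new check reports the problem up front.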
2019-08-30 09:27:07 -05:00
Dimitris Athanasiou 5921ae53d8
[7.x][ML] Regression dependent variable must be numeric (#46072) (#46136)
* [ML] Regression dependent variable must be numeric

This adds a validation that the dependent variable of a regression
analysis must be numeric.

* Address review comments and fix some problems

In addition to addressing the review comments, this
commit fixes a few issues I found during testing.

In particular:

- if there were mappings for required fields but they were
not included we were not reporting the error
- if explicitly included fields had unsupported types we were
not reporting the error

Unfortunately, I couldn't get those fixed without refactoring
the code in `ExtractedFieldsDetector`.
2019-08-30 09:57:43 +03:00
Zachary Tong cf8a4171e1
Rename `data-science` plugin to `analytics` (#46133)
Rename `data-science` plugin to `analytics`. 
Also removes enabled flag. Backport of #46092
2019-08-29 12:45:39 -04:00
Michael Basnight 51a703da29
Add enrich transport client support (#46002)
This commit adds an enrich client, as well as a smoke test to validate
the client works.
2019-08-29 09:10:07 -05:00
Przemysław Witek b8a0379057
Refactor auditor-related classes (#45893) (#46120) 2019-08-29 14:21:03 +02:00
Gordon Brown 47bbd9d9a9
[7.x] Fix rollover alias in SLM history index template (#46001)
This commit adds the `rollover_alias` setting required for ILM to work
correctly to the SLM history index template and adds assertions to the
SLM integration tests to ensure that it works correctly.
2019-08-28 14:50:22 -07:00
Martijn van Groningen 1157224a6b
Merge remote-tracking branch 'es/7.x' into enrich-7.x 2019-08-28 10:14:07 +02:00
Jake Landis 154d1dd962
Watcher max_iterations with foreach action execution (#45715) (#46039)
Prior to this commit the foreach action execution had a hard-coded
limit of 100 iterations. This commit allows the max number of
iterations to be configured ('max_iterations') on the foreach
action. The default remains 100.
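
For illustration, the actions section of a watch might look like the sketch below (the action name, payload path, and logging text are made up):

```
"actions": {
  "log_hits": {
    "foreach": "ctx.payload.hits.hits",
    "max_iterations": 500,
    "logging": {
      "text": "found hit {{ctx.payload._id}}"
    }
  }
}
```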
2019-08-27 16:57:20 -05:00
Armin Braun fdef293c81 Fix RegressionTests#fromXContent (#46029)
* The `trainingPercent` must be between `1` and `100`, not `0` and `100`, which was causing test failures
2019-08-27 18:24:26 +03:00
Dimitris Athanasiou 873ad3f942
[7.x][ML] Add option to regression to randomize training set (#45969) (#46017)
Adds a parameter `training_percent` to regression. The default
value is `100`. When the parameter is set to a value less than `100`,
we randomly choose, from the rows that can be used for training (i.e. those
that have a value for the dependent variable), whether to actually
use each one for training. This enables splitting the data into a training set and
the rest, usually called a testing, validation or holdout set, which allows
for validating the model on data that have not been used for training.

Technically, the analytics process considers as training the data that
have a value for the dependent variable. Thus, when we decide a training
row is not going to be used for training, we simply clear the row's
dependent variable.
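
For illustration, a minimal analytics config using the new parameter might look like this sketch (the job, index and field names are made up):

```json
PUT _ml/data_frame/analytics/house-prices
{
  "source": { "index": "houses" },
  "dest": { "index": "houses-predictions" },
  "analysis": {
    "regression": {
      "dependent_variable": "price",
      "training_percent": 80
    }
  }
}
```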
2019-08-27 17:53:11 +03:00
Yogesh Gaikwad 7b6246ec67
Add `manage_own_api_key` cluster privilege (#45897) (#46023)
The existing privilege model for API keys with privileges like
`manage_api_key`, `manage_security` etc. is too permissive and
we would want finer-grained control over the cluster privileges
for API keys. Previously, API keys created would also need these
privileges to get their own information.

This commit adds support for `manage_own_api_key` cluster privilege
which only allows api key cluster actions on API keys owned by the
currently authenticated user. Also adds support for retrieval of
the API key self-information when authenticating via API key
without the need for the additional API key privileges.
To support this privilege, we are introducing additional
authentication context along with the request context such that
it can be used to authorize cluster actions based on the current
user authentication.

The API key get and invalidate APIs introduce an `owner` flag
that can be set to true if the API key request (Get or Invalidate)
is for the API keys owned by the currently authenticated user only.
In that case, `realm` and `username` cannot be set as they are
assumed to be the currently authenticated ones.
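
For illustration, a role granting only this privilege and a self-scoped retrieval might look like the sketch below (the role name is made up):

```json
PUT /_security/role/manage_own_keys
{
  "cluster": [ "manage_own_api_key" ]
}
```

```
GET /_security/api_key?owner=true
```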

The changes cover the HLRC and documentation for the API changes.

Closes #40031
2019-08-28 00:44:23 +10:00
Dimitris Athanasiou dd6c13fdf9
[ML] Add description to DF analytics (#45774) (#46019) 2019-08-27 15:48:59 +03:00
Albert Zaharovits 1ebee5bf9b
PKI realm authentication delegation (#45906)
This commit introduces PKI realm delegation. This feature
supports the PKI authentication feature in Kibana.

In essence, this creates a new API endpoint which Kibana must
call to authenticate clients that use certificates in their TLS
connection to Kibana. The API call passes to Elasticsearch the client's
certificate chain. The response contains an access token to be further
used to authenticate as the client. The client's certificates are validated
by the PKI realms that have been explicitly configured to permit
certificates from the proxy (Kibana). The user calling the delegation
API must have the delegate_pki privilege.
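
For illustration, a delegation call might look like the sketch below; the certificate value is a placeholder for a base64-encoded DER certificate:

```json
POST /_security/delegate_pki
{
  "x509_certificate_chain": [ "MIIDeDCC..." ]
}
```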

Closes #34396
2019-08-27 14:42:46 +03:00
Zachary Tong 943a016bb2
Add Cumulative Cardinality agg (and Data Science plugin) (#45990)
This adds a pipeline aggregation that calculates the cumulative
cardinality of a field.  It does this by iteratively merging in the
HLL sketch from consecutive buckets and emitting the cardinality up
to that point.

This is useful for things like finding the total "new" users that have
visited a website (as opposed to "repeat" visitors).

This is a Basic+ aggregation and adds a new Data Science plugin
to house it and future advanced analytics/data science aggregations.
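
For illustration, a request using the new aggregation might look like the sketch below (the index and field names are made up):

```json
GET /user_hits/_search
{
  "size": 0,
  "aggs": {
    "users_per_day": {
      "date_histogram": {
        "field": "timestamp",
        "calendar_interval": "day"
      },
      "aggs": {
        "distinct_users": {
          "cardinality": { "field": "user_id" }
        },
        "total_new_users": {
          "cumulative_cardinality": {
            "buckets_path": "distinct_users"
          }
        }
      }
    }
  }
}
```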
2019-08-26 16:19:55 -04:00
Andrey Ershov 479ab9b8db Fix plaintext on TLS port logging (#45852)
Today, if a non-TLS record is received on a TLS port, a generic exception
is logged with the stack-trace.
The SSLExceptionHelper.isNotSslRecordException method does not work because
it assumes that NonSslRecordException would be the top-level exception.
This commit addresses the issue so that the log is more concise.

(cherry picked from commit 6b83527bf0c23d4d5b97fab7f290c43432945d4f)
2019-08-26 12:32:35 +02:00
Ioannis Kakavas 2bee27dd54
Allow Transport Actions to indicate authN realm (#45946)
This commit allows the Transport Actions for the SSO realms to
indicate the realm that should be used to authenticate the
constructed AuthenticationToken. This is useful in the case that
many authentication realms of the same type have been configured
and where the caller of the API (Kibana or a custom web app) already
knows which realm should be used, so there is no need to iterate over all
the realms of the same type.
The realm parameter is added in the relevant REST APIs as optional
so as not to introduce any breaking change.
2019-08-25 19:36:41 +03:00
Dimitris Athanasiou be554fe5f0
[7.x][ML] Improve progress reportings for DF analytics (#45856) (#45910)
Previously, the stats API reported a progress percentage
for DF analytics tasks that are running and are in the
`reindexing` or `analyzing` state.

This means that when the task is `stopped` there is no progress
reported. Thus, one cannot distinguish between a task that never
ran and one that completed.

In addition, there are blind spots in the progress reporting.
In particular, we do not account for when data is loaded into the
process. We also do not account for when results are written.

This commit addresses the above issues. It changes progress
to being a list of objects, each one describing the phase
and its progress as a percentage. We currently have 4 phases:
reindexing, loading_data, analyzing, writing_results.

When the task stops, progress is persisted as a document in the
state index. The stats API now reports progress from in-memory
if the task is running, or returns the persisted document
(if there is one).
2019-08-23 23:04:39 +03:00
Martijn van Groningen cb42e19a32
Change how type is stored in an enrich policy. (#45789)
A policy type controls how the enrich index is created and
the query executed against the match field. Currently there
is a single policy type (`exact_match`). In the near future
more policy types will be added and different policies may have
different configuration options.

For this reason type should be a json object instead of a string field:

```
{
   "exact_match": {
      ...
   }
}
```

instead of:

```
{
  "type": "exact_match",
  ...
}
```

This will make streaming parsing of enrich policies easier as, in the
new format, the parsing code can know ahead of time what configuration fields
to expect. In the latter format that is not possible if the type field
does not appear as the first field.

Relates to #32789
2019-08-23 13:43:38 +02:00
Martijn van Groningen 837cfa2640
Merge remote-tracking branch 'es/7.x' into enrich-7.x 2019-08-23 11:22:27 +02:00
markharwood 217e41ab6c
Search - added HLRC support for PinnedQueryBuilder (#45779) (#45853)
Added HLRC support for PinnedQueryBuilder

Related  #44074
2019-08-23 09:22:17 +01:00
Tim Vernum f94e4a9151
Set security index refresh interval to 1s (#45888)
The security indices were being created without specifying the
refresh interval, which means it would inherit a value from any
templates that exist.
However, certain security functionality depends on being able to
wait_for refresh, and causes errors (e.g. in Kibana) if that time
exceeds 30s.

This commit changes the security indices configuration to always be
created with a 1s refresh interval. This prevents any templates from
inadvertently interfering with the proper functioning of security.
It is possible for an administrator to explicitly change the refresh
interval after the indices have been created.

Backport of: #45434
2019-08-23 12:41:37 +10:00
Tim Vernum 029725fc35
Add SSL/TLS settings for watcher email (#45836)
This change adds a new SSL context

    xpack.notification.email.ssl.*

that supports the standard SSL configuration settings (truststore,
verification_mode, etc). This SSL context is used when configuring
outbound SMTP properties for watcher email notifications.
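
For illustration, the node-level configuration might look like the sketch below; the exact setting names under the `xpack.notification.email.ssl.` prefix follow the standard SSL settings, and the file name is made up:

```
xpack.notification.email.ssl.verification_mode: full
xpack.notification.email.ssl.truststore.path: watcher-email-truststore.p12
```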

Backport of: #45272
2019-08-23 10:13:51 +10:00
Benjamin Trent 8e3c54fff7
[7.x] [ML] Adding data frame analytics stats to _usage API (#45820) (#45872)
* [ML] Adding data frame analytics stats to _usage API (#45820)

* [ML] Adding data frame analytics stats to _usage API

* making the size of analytics stats 10k

* adjusting backport
2019-08-22 15:15:41 -05:00
Benjamin Trent e50a78cf50
[ML-DataFrame] version data frame transform internal index (#45375) (#45837)
Adds index versioning for the internal data frame transform index. Allows for new indices to be created and referenced; `GET` requests now query over the index pattern and take the latest doc (based on index name).
2019-08-22 11:46:30 -05:00
Przemysław Witek 7512337922
[7.x] Allow the user to specify 'query' in Evaluate Data Frame request (#45775) (#45825) 2019-08-22 11:14:26 +02:00
Gordon Brown 47b1e2b3d0
[7.x] Use rollover for SLM's history indices (#45686)
Following our own guidelines, SLM should use rollover instead of purely
time-based indices to keep shard counts low. This commit implements lazy
index creation for SLM's history indices, indexing via an alias, and
rollover in the built-in ILM policy.
2019-08-21 13:42:11 -06:00
Martijn van Groningen 2677ac14d2
Merge remote-tracking branch 'es/7.x' into enrich-7.x 2019-08-21 14:28:17 +02:00
Przemysław Witek bf701b83d2
Shorten field names in EstimateMemoryUsageResponse (#45719) (#45772) 2019-08-21 12:45:09 +02:00
Dimitris Athanasiou d5c3d9b50f
[7.x][ML] Do not skip rows with missing values for regression (#45751) (#45754)
Regression analysis supports missing fields. Even more, it is expected
that the dependent variable has missing values in the part of the
data frame that is not for training.

This commit allows declaring that an analysis supports missing values.
For such analysis, rows with missing values are not skipped. Instead,
they are written as normal with empty strings used for the missing values.

This also contains a fix to the integration test.

Closes #45425
2019-08-21 08:15:38 +03:00
Benjamin Trent ba7b677618
[ML] better handle empty results when evaluating regression (#45745) (#45759)
* [ML] better handle empty results when evaluating regression

* adding new failure test to ml_security black list

* fixing equality check for regression results
2019-08-20 17:37:04 -05:00
Michael Basnight e3373d349b Consolidate enrich list all and get by name APIs (#45705)
The get and list APIs are a single API in this commit. Whether
requesting one named policy or all policies, a list of policies is
returened. The list API code has all been removed and the GET api is
what remains, which contains much of the list response code.
2019-08-20 10:29:59 -05:00
Przemysław Witek b37ebd1adf
Prepare the codebase for new Auditor subclasses (#45716) (#45731) 2019-08-20 16:03:50 +02:00
Przemysław Witek 80dd0a0948
Get rid of EstimateMemoryUsageRequest and EstimateMemoryUsageAction.Request. (#45718) (#45725) 2019-08-20 15:49:17 +02:00
Benjamin Trent 88641a08af
[ML][Data frame] fixing failure state transitions and race condition (#45627) (#45656)
* [ML][Data frame] fixing failure state transitions and race condition (#45627)

There is a small window for a race condition while we are flagging a task as failed.

Here are the steps where the race condition occurs:
1. A failure occurs
2. Before `AsyncTwoPhaseIndexer` calls the `onFailure` handler it does the following:
   a. `finishAndSetState()` which sets the IndexerState to STARTED
   b. `doSaveState(...)` which attempts to save the current state of the indexer
3. Another trigger is fired BEFORE `onFailure` can fire, but AFTER `finishAndSetState()` occurs.

The trick here is that we will eventually set the indexer to failed, but possibly not before another trigger had the opportunity to fire. This could obviously cause some weird state interactions. To combat this, I have put in some predicates to verify the state before taking actions. This is so if state is indeed marked failed, the "second trigger" stops ASAP.

Additionally, I move the task state checks INTO the `start` and `stop` methods, which will now require a `force` parameter. `start`, `stop`, `trigger` and `markAsFailed` are all `synchronized`. This should give us some guarantees that one will not switch states out from underneath another.

I also flag the task as `failed` BEFORE we successfully write it to cluster state, this is to allow us to make the task fail more quickly. But, this does add the behavior where the task is "failed" but the cluster state does not indicate as much. Adding the checks in `start` and `stop` will handle this "real state vs cluster state" race condition. This has always been a problem for `_stop` as it is not a master node action and doesn’t always have the latest cluster state.

closes #45609

Relates to #45562

* [ML][Data Frame] moves failure state transition for MT safety (#45676)

* [ML][Data Frame] moves failure state transition for MT safety

* removing unused imports
2019-08-20 07:30:17 -05:00
Martijn van Groningen 5ea0985711
Merge remote-tracking branch 'es/7.x' into enrich-7.x 2019-08-16 09:47:11 +02:00
Benjamin Trent fde5dae387
[ML][Data Frame] adjusting change detection workflow (#45511) (#45580)
* [ML][Data Frame] adjusting change detection workflow

* adjusting for PR comment

* disallowing null as an argument value
2019-08-14 17:26:24 -05:00
Nick Knize 647a8308c3
[SPATIAL] Backport new ShapeFieldMapper and ShapeQueryBuilder to 7x (#45363)
* Introduce Spatial Plugin (#44389)

Introduce a skeleton Spatial plugin that holds new licensed features coming to 
Geo/Spatial land!

* [GEO] Refactor DeprecatedParameters in AbstractGeometryFieldMapper (#44923)

Refactor DeprecatedParameters specific to legacy geo_shape out of
AbstractGeometryFieldMapper.TypeParser#parse.

* [SPATIAL] New ShapeFieldMapper for indexing cartesian geometries (#44980)

Add a new ShapeFieldMapper to the xpack spatial module for
indexing arbitrary cartesian geometries using a new field type called shape.
The indexing approach leverages lucene's new XYShape field type which is
backed by BKD in the same manner as LatLonShape but without the WGS84
latitude longitude restrictions. The new field mapper builds on and
extends the refactoring effort in AbstractGeometryFieldMapper and accepts
shapes in either GeoJSON or WKT format (both of which support non geospatial
geometries).

Tests are provided in the ShapeFieldMapperTest class in the same manner
as GeoShapeFieldMapperTests and LegacyGeoShapeFieldMapperTests.
Documentation for how to use the new field type and what parameters are
accepted is included. The QueryBuilder for searching indexed shapes is
provided in a separate commit.

* [SPATIAL] New ShapeQueryBuilder for querying indexed cartesian geometry (#45108)

Add a new ShapeQueryBuilder to the xpack spatial module for
querying arbitrary Cartesian geometries indexed using the new shape field
type.

The query builder extends AbstractGeometryQueryBuilder and leverages the
ShapeQueryProcessor added in the previous field mapper commit.

Tests are provided in ShapeQueryTests in the same manner as
GeoShapeQueryTests and docs are updated to explain how the query works.
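
For illustration, indexing and querying a shape field might look like the sketch below (the index name, field name and coordinates are made up):

```json
PUT /geometries
{
  "mappings": {
    "properties": {
      "geometry": { "type": "shape" }
    }
  }
}
```

```json
GET /geometries/_search
{
  "query": {
    "shape": {
      "geometry": {
        "shape": {
          "type": "envelope",
          "coordinates": [ [ 0.0, 50.0 ], [ 50.0, 0.0 ] ]
        },
        "relation": "intersects"
      }
    }
  }
}
```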
2019-08-14 16:35:10 -05:00
Benjamin Trent 0c343d8443
[7.x] [ML][Transforms] adjusting stats.progress for cont. transforms (#45361) (#45551)
* [ML][Transforms] adjusting stats.progress for cont. transforms (#45361)

* [ML][Transforms] adjusting stats.progress for cont. transforms

* addressing PR comments

* rename fix

* Adjusting bwc serialization versions
2019-08-14 13:08:27 -05:00
Martijn van Groningen 43b8ab607d
Improve naming of enrich policy fields. (#45494)
Renamed `enrich_key` to `match_field` and
renamed `enrich_values` to `enrich_fields`.

Relates #32789
2019-08-14 11:45:22 +02:00
Przemysław Witek df574e5168
[7.x] Implement ml/data_frame/analytics/_estimate_memory_usage API endpoint (#45188) (#45510) 2019-08-14 08:26:03 +02:00
Martijn van Groningen 0353eb9291
required changes after merging in upstream branch 2019-08-13 09:17:57 +02:00
Martijn van Groningen 1951cdf1cb
Merge remote-tracking branch 'es/7.x' into enrich-7.x 2019-08-13 09:12:31 +02:00
Yogesh Gaikwad 471d940c44
Refactor cluster privileges and cluster permission (#45265) (#45442)
The current implementations make it difficult to add
new privileges (for example: a cluster privilege which is
more than cluster action-based and not exposed to the security
administrator). At a high level, we would like our cluster privileges
to be either:
- a named cluster privilege
  This corresponds to the `cluster` field from the role descriptor
- or a configurable cluster privilege
  This corresponds to the `global` field from the role descriptor and
allows a security administrator to configure them.

Some of the responsibilities, like the merging of action-based cluster privileges,
are now pushed to the cluster permission level. How to implement the predicate
(using Automaton) is now enforced by the cluster permission.

`ClusterPermission` helps in enforcing the cluster level access either by
performing checks against cluster action and optionally against a request.
It is a collection of one or more permission checks where if any of the checks
allow access then the permission allows access to a cluster action.

Implementations of cluster privilege must be able to provide information
regarding the predicates to the cluster permission so that they can be enforced.
This is enforced by making implementations of cluster privilege aware of
cluster permission builder and provide a way to specify how the permission is
to be built for a given privilege.

This commit renames `ConditionalClusterPrivilege` to `ConfigurableClusterPrivilege`.
`ConfigurableClusterPrivilege` is a renderable cluster privilege exposed
as a `global` field in role descriptor.

Other than this there is a requirement where we would want to know if a cluster
permission is implied by another cluster-permission (`has-privileges`).
This is helpful in addressing queries related to privileges for a user.
This is not simply a check of cluster permissions since we do not
have access to runtime information (like request object).
This refactoring does not try to address those scenarios.

Relates #44048
2019-08-13 09:06:18 +10:00
Armin Braun a9e1402189
Remove Settings from BaseRestRequest Constructor (#45418) (#45429)
* Resolving the todo, cleaning up the unused `settings` parameter
* Cleaning up some other minor dead code in affected classes
2019-08-12 05:14:45 +02:00
Benjamin Trent fac1a6f8e8
[ML][Data Frame] have DataFrameTransformConfigUpdate#apply set Version (#45391) (#45400) 2019-08-09 14:32:49 -05:00
Hendrik Muhs bf4da6c6ad
[ML-DataFrame] fix starting a batch data frame after stopping at runtime (#45340) (#45381)
fix loading of next checkpoint after data frame transform has been stopped/started within one run

closes #45339
2019-08-09 20:30:11 +02:00
Dimitris Athanasiou 27497ff75f
[7.x][ML] Add regression analysis to DF analytics (#45292) (#45388)
This commit adds a first draft of a regression analysis
to data frame analytics. There is high probability that
the exact syntax might change.

This commit adds the new analysis type and its parameters as
well as appropriate validation. It also modifies the extractor
and the fields detector to be able to handle categorical fields
as regression analysis supports them.
2019-08-09 19:31:13 +03:00
David Roberts 14545f8958
[ML-DataFrame] Combine task_state and indexer_state in _stats (#45324)
This commit replaces task_state and indexer_state in the
data frame _stats output with a single top level state
that combines the two. It is defined as:

- failed if what's currently reported as task_state is failed
- stopped if there is no persistent task
- Otherwise what's currently reported as indexer_state

Backport of #45276
2019-08-08 16:24:26 +01:00
Martijn van Groningen 708f856940
Merge remote-tracking branch 'es/7.x' into enrich-7.x 2019-08-08 16:52:45 +02:00
Benjamin Trent 5db9982f71
[7.x] [ML][Data Frame] Add update transform api endpoint (#45154) (#45279)
* [ML][Data Frame] Add update transform api endpoint (#45154)

This adds the ability to `_update` stored data frame transforms. All mutable fields are applied when the next checkpoint starts. The exception is `description`.

This PR contains all that is necessary for this addition:
* HLRC
* Docs
* Server side
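
For illustration, an update call might look like the sketch below; the endpoint path follows the 7.x data frame transform APIs of the time, and the transform id and description are made up:

```json
POST _data_frame/transforms/ecommerce_transform/_update
{
  "description": "Maximum priced ecommerce data by customer_id"
}
```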
2019-08-07 10:37:35 -05:00
Lee Hinman c7ec0b8431 Include in-progress snapshot for a policy with get SLM policy… (#45245)
This commit adds the "in_progress" key to the SLM get policy API,
returning a policy that looks like:

```json
{
  "daily-snapshots" : {
    "version" : 1,
    "modified_date" : "2019-08-05T18:41:48.778Z",
    "modified_date_millis" : 1565030508778,
    "policy" : {
      "name" : "<production-snap-{now/d}>",
      "schedule" : "0 30 1 * * ?",
      "repository" : "repo",
      "config" : {
        "indices" : [
          "foo-*",
          "important"
        ],
        "ignore_unavailable" : true,
        "include_global_state" : false
      },
      "retention" : {
        "expire_after" : "10m"
      }
    },
    "last_success" : {
      "snapshot_name" : "production-snap-2019.08.05-oxctmnobqye3luim4uejhg",
      "time_string" : "2019-08-05T18:42:23.257Z",
      "time" : 1565030543257
    },
    "next_execution" : "2019-08-06T01:30:00.000Z",
    "next_execution_millis" : 1565055000000,
    "in_progress" : {
      "name" : "production-snap-2019.08.05-oxctmnobqye3luim4uejhg",
      "uuid" : "t8Idqt6JQxiZrzp0Vt7z6g",
      "state" : "STARTED",
      "start_time" : "2019-08-05T18:42:22.998Z",
      "start_time_millis" : 1565030542998
    }
  }
}
```

These are only visible while the snapshot is being taken (or failed),
since it reads from the cluster state rather than from the repository
itself.
2019-08-07 08:29:49 -06:00
Tom Callahan a7a419bee8 Change Ldap SDK License to LGPL-2.1 (#45116)
We currently use the unboundid ldap SDK, which is triply licensed under
GPL-2.0, LGPL-2.1, and the "UnboundID LDAP SDK Free Use License". We currently
identify the license as the latter, but LGPL-2.1 is the one we should be using
per our policy.
2019-08-06 16:48:09 -04:00
Zachary Tong 422aca9a5d Fix Rollup job creation to work with templates (#43943)
The PutJob API accidentally used an "expert" API of CreateIndexRequest.
That API is semi-lenient to syntax; a type could be omitted and the
request would work as expected.  But if a type was omitted it would
not merge with templates correctly, leading to index creation that
only has the template mappings and not the mappings requested in the request.

This commit refactors the PutJob API to:

- Include the type name
- Use a less "expert" API in an attempt to future proof against errors
- Uses an XContentBuilder instead of string replacing, removes json template
2019-08-06 10:53:44 -04:00
Christoph Büscher 3366726ad1 Enable reloading of synonym_graph filters (#45135)
Reloading of the synonym_graph filter doesn't work currently because the search-time
AnalysisMode doesn't get propagated to the TokenFilterFactory emitted by the
graph filter's getChainAwareTokenFilterFactory() method. This change fixes that.
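
For illustration, the reload path this fixes might be exercised as in the sketch below: an index with an updateable synonym_graph filter used in a search analyzer, followed by a call to the reload search analyzers API (the index, analyzer, and file names are made up):

```json
PUT /my-index
{
  "settings": {
    "analysis": {
      "filter": {
        "synonyms_filter": {
          "type": "synonym_graph",
          "synonyms_path": "analysis/synonyms.txt",
          "updateable": true
        }
      },
      "analyzer": {
        "synonyms_analyzer": {
          "tokenizer": "standard",
          "filter": [ "lowercase", "synonyms_filter" ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "message": {
        "type": "text",
        "analyzer": "standard",
        "search_analyzer": "synonyms_analyzer"
      }
    }
  }
}
```

```
POST /my-index/_reload_search_analyzers
```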

Closes #45127
2019-08-02 15:33:42 +02:00
Tim Vernum e21d58541a
Improve errors when TLS files cannot be read (#45122)
This change improves the exception messages that are thrown when the
system cannot read TLS resources such as keystores, truststores,
certificates, keys or certificate-chains (CAs).

This change specifically handles:

- Files that do not exist
- Files that cannot be read due to file-system permissions
- Files that cannot be read due to the ES security-manager

Backport of: #44787
2019-08-02 12:29:43 +10:00
Tim Vernum 590777150f
Explicitly fail if a realm only exists in keystore (#45091)
There are no realms that can be configured exclusively with secure
settings. Every realm that supports secure settings also requires one
or more non-secure settings.
However, sometimes a node will be configured with entries in the
keystore for which there is nothing in elasticsearch.yml - this may be
because the realm was removed from the yml, but not deleted from the
keystore, or it could be because there was a typo in the realm name
which has accidentally orphaned the keystore entry.

In these cases the realm building would fail, but the error would not
always be clear or point to the root cause (orphaned keystore
entries). RealmSettings would act as though the realm existed, but
then fail because an incorrect combination of settings was provided.

This change causes realm building to fail early, with an explicit
message about incorrect keystore entries.

Backport of: #44471
2019-08-02 12:28:59 +10:00
Martijn van Groningen aae2f0cff2
Merge remote-tracking branch 'es/7.x' into enrich-7.x 2019-08-01 13:38:03 +07:00
Mayya Sharipova 0c68765088
Adds usage stats for vectors (#45023)
Example of usage:

```
_xpack/usage

"vectors": {
    "available": true,
    "enabled": true,
    "dense_vector_fields_count" : 1,
    "sparse_vector_fields_count" : 1,
    "dense_vector_dims_avg_count" : 100
}
```

Backport for #44512
2019-07-31 12:32:41 -04:00
Lee Hinman 598c4e72f9
[7.x] Rename indexlifecycle to ilm and snapshotlifecycle to sl… (#44977)
* Rename indexlifecycle to ilm and snapshotlifecycle to slm (#44917)

As a followup to #44725 and #44608, which renamed the packages within
the x-pack project, this renames the packages within the core x-pack
project. It also renames 'snapshotlifecycle' within the HLRC to slm.

* Fix one more import
2019-07-29 15:51:14 -06:00
Benjamin Trent 3b514f0dae
[ML] update Instant serialization (#44765) (#44954)
* [ML] update Instant serialization

* addressing PR comments

* removing unused import
2019-07-29 13:06:56 -05:00
Martijn van Groningen db49cb505e
Merge remote-tracking branch 'es/7.x' into enrich-7.x 2019-07-29 14:45:10 +07:00
Gordon Brown d4b2d21339
Add option to filter ILM explain response (#44777)
In order to make it easier to interpret the output of the ILM Explain
API, this commit adds two request parameters to that API:

- `only_managed`, which causes the response to only contain indices
  which have `index.lifecycle.name` set
- `only_errors`, which causes the response to contain only indices in an
  ILM error state

"Error state" is defined as either being in the `ERROR` step or having
`index.lifecycle.name` set to a policy that does not exist.
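
For illustration, the filters might be used like this (the index pattern is made up):

```
GET /logs-*/_ilm/explain?only_managed=true
GET /logs-*/_ilm/explain?only_errors=true
```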
2019-07-26 11:57:38 -04:00
Przemysław Witek 79121ea127
[7.x] Implement exponential average search time per hour statistics. (#44683) (#44897) 2019-07-26 15:56:34 +02:00
Przemysław Witek 8bb8543fdf
Treat PostDataActionResponse.DataCounts.bucketCount as incremental rather than absolute (total). (#44803) (#44856) 2019-07-25 20:46:56 +02:00
James Baiera c5528a25e6 Merge branch '7.x' into enrich-7.x 2019-07-25 13:12:56 -04:00
Hendrik Muhs 2ca6306452 do not assert on indexer state (#44854)
remove the unreliable check for the state change

fixes #44813
2019-07-25 16:39:24 +02:00
David Roberts b2e969f4ba
[ML-DataFrame] Remove ID field from data frame indexer stats (#44848)
This is a followup to #44350. The indexer stats used to
be persisted standalone, but now are only persisted as
part of a state-and-stats document. During the review
of #44350 it was decided that we'll stick with this
design, so there will never be a need for an indexer
stats object to store its transform ID as it is stored
on the enclosing document. This PR removes the indexer
stats document ID.

Backport of #44768
2019-07-25 15:19:32 +01:00
Przemysław Witek 53f409e5ae
Add result_type field to TimingStats and DatafeedTimingStats documents (#44812) (#44841) 2019-07-25 10:11:55 +02:00
Gordon Brown 2ac54da60a
Fix swapped variables in error message (#44300)
The alias name and index were in the incorrect order in this error
message. This commit corrects the order.
2019-07-24 11:38:15 -04:00
Przemysław Witek 26da573e94
[ML] [7.x] Only emit deprecation warning if there was actual change of a datafeed's job_id. (#44755)
* Only emit deprecation warning if there was actual change of a datafeed's job_id.
* Add @Deprecated annotation to DatafeedUpdate.Builder#setJobId method
2019-07-24 10:03:25 +02:00
David Roberts caf9411a72
[ML] Improve response format of data frame stats endpoint (#44743)
This change adjusts the data frame transforms stats
endpoint to return a structure that is easier to
understand.

This is a breaking change for clients of the data frame
transforms stats endpoint, but the feature is in beta so
stability is not guaranteed.

Backport of #44350
2019-07-23 18:00:50 +01:00
Przemysław Witek 16c8e18013
Deprecate the ability to update datafeed's job_id. (#44691) (#44742) 2019-07-23 14:48:56 +02:00
Jason Tedor 5878bde8dc
Rename SLM package to slm (#44608)
This commit renames the SLM package from snapshotlifecycle to slm. We
have all come to know index lifecycle management as ILM, the APIs and
settings use ilm, and it would be nice if the package did too. For SLM,
let's use slm for all of these including the package name from the
beginning.
2019-07-23 07:35:06 +09:00
Benjamin Trent 4456850a8e
[7.x] [ML][Data Frame] Add optional defer_validation param to PUT (#44455) (#44697)
* [ML][Data Frame] Add optional defer_validation param to PUT (#44455)

* [ML][Data Frame] Add optional defer_validation param to PUT

* addressing PR comments

* reverting bad replace

* addressing pr comments

* Update put-transform.asciidoc

* Update put-transform.asciidoc

* Update put-transform.asciidoc

* adjusting for backport

* fixing imports

* [DOCS] Fixes formatting in  create data frame transform API
2019-07-22 15:12:55 -05:00
James Baiera fc20264b99 Add Enrich index background task to cleanup old indices (#43746)
This PR adds a background maintenance task that is scheduled on the master node only.
The deletion of an index is based on whether it is not linked to a policy or the enrich alias is no
longer pointing at it. Synchronization has been added to make sure that no policy
executions are running at the time of cleanup, and if any executions do occur, the marking
process delays cleanup until the next run.
2019-07-22 14:41:22 -04:00
Benjamin Trent 06e21f7902
[7.x] [ML][Data Frame] adding force delete (#44590) (#44696)
* [ML][Data Frame] adding force delete (#44590)

* [ML][Data Frame] adding force delete

* Update delete-transform.asciidoc

* adjusting for backport
2019-07-22 13:13:25 -05:00
Hendrik Muhs 4387d81e5b add a test to check excecution flow (#44481)
add a test for the execution flow of a2p indexer
2019-07-22 17:07:11 +02:00
Tal Levy 7c84636029
Remove StreamOutput #writeOptionalStreamable and #writeStreamableList (#44602) (#44643)
remove usages of writeOptionalStreamable and writeStreamableList

relates #34389.
2019-07-19 15:55:53 -07:00
Benjamin Trent 2e303fc5f7
[ML][Data Frame] adding dynamic cluster setting for failure retries (#44577) (#44639)
This adds a new dynamic cluster setting `xpack.data_frame.num_transform_failure_retries`.

This setting indicates how many times non-critical failures should be retried before a data frame transform is marked as failed and should stop executing. At the time of this commit: Min: 0, Max: 100, Default: 10
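
For illustration, since the setting is dynamic it could be changed with something like the following (the value is made up):

```json
PUT /_cluster/settings
{
  "persistent": {
    "xpack.data_frame.num_transform_failure_retries": 5
  }
}
```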
2019-07-19 16:17:39 -05:00
Ryan Ernst f193d14764
Convert remaining Action Response/Request to writeable.reader (#44528) (#44607)
This commit converts readFrom to ctor with StreamInput on the remaining
ActionResponse and ActionRequest classes.

relates #34389
2019-07-19 13:33:38 -07:00
Christoph Büscher eafe54c81c Fix AnalysisMode propagation in NamedAnalyzer (#44626)
NamedAnalyzer should return the same AnalysisMode as any custom analyzer it
wraps, otherwise AnalysisMode.ALL. This used to be only CustomAnalyzer in the
past, but with the introduction of the ReloadableCustomAnalyzer this needs to be
added as an option where the analysis mode gets propagated.

Closes #44625
2019-07-19 18:18:43 +02:00
Ryan Ernst 60785a9fa8
Convert several direct uses of Streamable to Writeable (#44586) (#44604)
This commit converts several utility classes that implement Streamable
to have StreamInput constructors. It also adds a default version of
readFrom to Streamable so that overriding to throw UOE is not necessary.

relates #34389
2019-07-18 21:25:44 -07:00
Ryan Ernst 13f46aa801
Convert index and persistent actions/response to writeable (#44582) (#44601)
This commit converts several more classes from streamable to writeable
in server, mostly within the o.e.index and o.e.persistent packages.

relates #34389
2019-07-18 18:32:09 -07:00
Tal Levy 03f5084ac7
remove usages of #readOptionalStreamable, #readStreamableList. (#44578) (#44598)
This commit removes references to Streamable from StreamInput.

This is all a part of the effort to remove Streamable usage.

relates #34389.
2019-07-18 16:19:02 -07:00
Lee Hinman 3001f7941f Allow empty configuration for SLM policies (#44465)
* Allow empty configuration for SLM policies

When putting or updating a snapshot lifecycle policy it was not possible
to elide the `config` map. This commit makes the configuration optional,
the same way that it is when taking a snapshot.

Relates to #38461

* Add Objects.requireNonNull for required parts of the policy
2019-07-18 16:20:31 -06:00
Lee Hinman fe2ef66e45 Expose index age in ILM explain output (#44457)
* Expose index age in ILM explain output

This adds the index's age to the ILM explain output, for example:

```
{
  "indices" : {
    "ilm-000001" : {
      "index" : "ilm-000001",
      "managed" : true,
      "policy" : "full-lifecycle",
      "lifecycle_date" : "2019-07-16T19:48:22.294Z",
      "lifecycle_date_millis" : 1563306502294,
      "age" : "1.34m",
      "phase" : "hot",
      "phase_time" : "2019-07-16T19:48:22.487Z",
      ... etc ...
    }
  }
}
```

This age can be used to tell when ILM will transition the index to the
next phase, based on that phase's `min_age`.

Resolves #38988

* Expose age in getters and in HLRC
2019-07-18 15:33:45 -06:00
Tal Levy c8a8915b27
migrate rollup/monitoring/graph/watcher actions to Writeable (#44464) (#44538)
this commit migrates leftover actions from a few x-pack plugins
to the new Writeable.Reader infrastructure.

relates #34389.
2019-07-18 08:42:56 -07:00
Christoph Büscher e9f257b4d9 Fix ReloadDetailsTests compilation 2019-07-18 11:55:22 +02:00
Christoph Büscher eed2db8947 Add stream serialization to ReloadAnalyzersResponse (#44420)
This change adds Writeable support to ReloadAnalyzersResponse that
is required when using the transport client in 7.x.

Closes #44383
2019-07-18 11:11:54 +02:00
David Kyle 0fc091f166
Enable XLint warnings for ML (#44346)
Removes the warning suppression -Xlint:-deprecation,-rawtypes,-serial,-try,-unchecked.
Many warnings were unchecked warnings in the test code often because of the use of mocks.
These are suppressed with @SuppressWarnings
2019-07-18 09:33:37 +01:00
Ryan Ernst edd26339c5
Convert remaining request classes in xpack core to writeable.reader (#44524) (#44534)
This commit converts all remaining classes extending ActionRequest
in xpack core to have a StreamInput constructor.

relates #34389
2019-07-18 01:11:45 -07:00
Tal Levy a5ad59451c
migrate more ML actions off of using Request suppliers (#44462) (#44529)
many classes still use the Streamable constructors of HandledTransportAction,
this commit moves more of those classes to the new Writeable constructors.

relates #34389.
2019-07-17 20:28:29 -07:00
Tal Levy 075a3f0e99
remove usage of ActionType#(String) (#44459) (#44526)
this commit removes usage of the deprecated
constructor with a single argument and no Writeable.Reader.

The purpose of this is to reduce the boilerplate necessary for
properly implementing a new action, as well as reducing the
chances of using the incorrect super constructor while classes
are being migrated to Writeable

relates #34389.
2019-07-17 20:28:11 -07:00
Ryan Ernst 2a2686e6e7
Convert remaining ActionTypes to writeable in xpack core (#44467) (#44525)
This commit converts all remaining ActionType response classes to
writeable in xpack core. It also converts a few from server which were
used by xpack core.

relates #34389
2019-07-17 18:01:45 -07:00
Ryan Ernst 17c4b2b839
Convert MasterNodeRequest to implement Writeable.Reader (#44452) (#44513)
This commit converts all MasterNodeRequest subclasses to fullfill
Writeable.Reader constructors.

relates #34389
2019-07-17 18:01:29 -07:00
Ryan Ernst 0755a13c9f
Convert AcknowledgedRequest to Writeable.Reader (#44412) (#44454)
This commit adds constructors to AcknolwedgedRequest subclasses to
implement Writeable.Reader, and ensures all future subclasses implement
the same.

relates #34389
2019-07-17 11:17:36 -07:00
Yannick Welsch d98b3e4760
Move frozen indices to x-pack module (#44490)
Backport of #44408 and #44286.
2019-07-17 16:53:10 +02:00
Ryan Ernst 6e50bafa8f
Convert Broadcast request and response to use writeable.reader (#44386) (#44453)
This commit converts the request and response classes for broadcast
actions to implement ctors for Writeable.Reader and forces all future
implementations to implement the same.

relates #34389
2019-07-16 23:24:02 -07:00
Tal Levy 901310a826
[7.x] Migrate ML Actions to use writeable ActionType (#44302) (#44391)
* Migrate ML Actions to use writeable ActionType (#44302)

This commit converts all the StreamableResponseActionType
actions in the ML core module to be ActionType and leverage
the Writeable infrastructure.
2019-07-16 12:41:10 -07:00
Przemysław Witek 9613700a63
[7.x] Implement MlConfigIndexMappingsFullClusterRestartIT test which verifies that .ml-config index mappings are properly updated during cluster upgrade (#44341) (#44366) 2019-07-16 21:22:40 +02:00
James Baiera a3ba11cfc1
Improve CryptoService error message on missing secure file (#43623) (#44364)
This improves the error message when encryption of sensitive watcher
data is configured, but no system file was specified in the keystore.
This error message is displayed on startup.

This also closes the input stream of the secure file properly.

Closes #43619
2019-07-16 13:58:57 -04:00
Benjamin Trent 2c7ff812da
[ML] Add r_squared eval metric to regression (#44248) (#44378)
* [ML] Add r_squared eval metric to regression

* fixing tests and binarysoftclassification class

* Update RSquared.java

* Update x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/ml/dataframe/evaluation/regression/RSquared.java

Co-Authored-By: David Kyle <david.kyle@elastic.co>

* removing unnecessary debug test
2019-07-16 11:11:31 -05:00
Lee Hinman fb0461ac76
[7.x] Add Snapshot Lifecycle Management (#44382)
* Add Snapshot Lifecycle Management (#43934)

* Add SnapshotLifecycleService and related CRUD APIs

This commit adds `SnapshotLifecycleService` as a new service under the ilm
plugin. This service handles snapshot lifecycle policies by scheduling based on
the policies defined schedule.

This also includes the get, put, and delete APIs for these policies

Relates to #38461

* Make scheduledJobIds return an immutable set

* Use Object.equals for SnapshotLifecyclePolicy

* Remove unneeded TODO

* Implement ToXContentFragment on SnapshotLifecyclePolicyItem

* Copy contents of the scheduledJobIds

* Handle snapshot lifecycle policy updates and deletions (#40062)

(Note this is a PR against the `snapshot-lifecycle-management` feature branch)

This adds logic to `SnapshotLifecycleService` to handle updates and deletes for
snapshot policies. Policies with incremented versions have the old policy
cancelled and the new one scheduled. Deleted policies have their schedules
cancelled when they are no longer present in the cluster state metadata.

Relates to #38461

* Take a snapshot for the policy when the SLM policy is triggered (#40383)

(This is a PR for the `snapshot-lifecycle-management` branch)

This commit fills in `SnapshotLifecycleTask` to actually perform the
snapshotting when the policy is triggered. Currently there is no handling of the
results (other than logging) as that will be added in subsequent work.

This also adds unit tests and an integration test that schedules a policy and
ensures that a snapshot is correctly taken.

Relates to #38461

* Record most recent snapshot policy success/failure (#40619)

Keeping a record of the results of the successes and failures will aid
troubleshooting of policies and make users more confident that their
snapshots are being taken as expected.

This is the first step toward writing history in a more permanent
fashion.

* Validate snapshot lifecycle policies (#40654)

(This is a PR against the `snapshot-lifecycle-management` branch)

With the commit, we now validate the content of snapshot lifecycle policies when
the policy is being created or updated. This checks for the validity of the id,
name, schedule, and repository. Additionally, cluster state is checked to ensure
that the repository exists prior to the lifecycle being added to the cluster
state.

Part of #38461

* Hook SLM into ILM's start and stop APIs (#40871)

(This pull request is for the `snapshot-lifecycle-management` branch)

This change allows the existing `/_ilm/stop` and `/_ilm/start` APIs to also
manage snapshot lifecycle scheduling. When ILM is stopped all scheduled jobs are
cancelled.

Relates to #38461

* Add tests for SnapshotLifecyclePolicyItem (#40912)

Adds serialization tests for SnapshotLifecyclePolicyItem.

* Fix improper import in build.gradle after master merge

* Add human readable version of modified date for snapshot lifecycle policy (#41035)

* Add human readable version of modified date for snapshot lifecycle policy

This small change changes it from:

```
...
"modified_date": 1554843903242,
...
```

To

```
...
"modified_date" : "2019-04-09T21:05:03.242Z",
"modified_date_millis" : 1554843903242,
...
```

Including the `"modified_date"` field when the `?human` field is used.

Relates to #38461

* Fix test

* Add API to execute SLM policy on demand (#41038)

This commit adds the ability to perform a snapshot on demand for a policy. This
can be useful to take a snapshot immediately prior to performing some sort of
maintenance.

```json
PUT /_ilm/snapshot/<policy>/_execute
```

And it returns the response with the generated snapshot name:

```json
{
  "snapshot_name" : "production-snap-2019.04.09-rfyv3j9qreixkdbnfuw0ug"
}
```

Note that this does not allow waiting for the snapshot, and the snapshot could
still fail. It *does* record this information into the cluster state similar to
a regularly triggered SLM job.

Relates to #38461

* Add next_execution to SLM policy metadata (#41221)

* Add next_execution to SLM policy metadata

This adds the next time a snapshot lifecycle policy will be executed when
retrieving a policy's metadata, for example:

```json
GET /_ilm/snapshot?human
{
  "production" : {
    "version" : 1,
    "modified_date" : "2019-04-15T21:16:21.865Z",
    "modified_date_millis" : 1555362981865,
    "policy" : {
      "name" : "<production-snap-{now/d}>",
      "schedule" : "*/30 * * * * ?",
      "repository" : "repo",
      "config" : {
        "indices" : [
          "foo-*",
          "important"
        ],
        "ignore_unavailable" : true,
        "include_global_state" : false
      }
    },
    "next_execution" : "2019-04-15T21:16:30.000Z",
    "next_execution_millis" : 1555362990000
  },
  "other" : {
    "version" : 1,
    "modified_date" : "2019-04-15T21:12:19.959Z",
    "modified_date_millis" : 1555362739959,
    "policy" : {
      "name" : "<other-snap-{now/d}>",
      "schedule" : "0 30 2 * * ?",
      "repository" : "repo",
      "config" : {
        "indices" : [
          "other"
        ],
        "ignore_unavailable" : false,
        "include_global_state" : true
      }
    },
    "next_execution" : "2019-04-16T02:30:00.000Z",
    "next_execution_millis" : 1555381800000
  }
}
```

Relates to #38461

* Fix and enhance tests

* Figured out how to Cron

* Change SLM endpoint from /_ilm/* to /_slm/* (#41320)

This commit changes the endpoint for snapshot lifecycle management from:

```
GET /_ilm/snapshot/<policy>
```

to:

```
GET /_slm/policy/<policy>
```

It mimics the ILM path only using `slm` instead of `ilm`.

Relates to #38461

* Add initial documentation for SLM (#41510)

* Add initial documentation for SLM

This adds the initial documentation for snapshot lifecycle management.

It also includes the REST spec API json files since they're sort of
documentation.

Relates to #38461

* Add `manage_slm` and `read_slm` roles (#41607)

* Add `manage_slm` and `read_slm` roles

This adds two more built in roles -

`manage_slm` which has permission to perform any of the SLM actions, as well as
stopping, starting, and retrieving the operation status of ILM.

`read_slm` which has permission to retrieve snapshot lifecycle policies as well
as retrieving the operation status of ILM.

Relates to #38461

* Add execute to the test

* Fix ilm -> slm typo in test

* Record SLM history into an index (#41707)

It is useful to have a record of the actions that Snapshot Lifecycle
Management takes, especially for the purposes of alerting when a
snapshot fails or has not been taken successfully for a certain amount of
time.

This adds the infrastructure to record SLM actions into an index that
can be queried at leisure, along with a lifecycle policy so that this
history does not grow without bound.

Additionally, because SLM automatically sets up an index and a lifecycle policy,
`index_lifecycle` custom metadata ends up in the cluster state, which some of
the ML tests don't know how to deal with because they set up custom
`NamedXContentRegistry`s. Watcher would cause the same problem, but it
is already disabled (for the same reason).

* High Level Rest Client support for SLM (#41767)

* High Level Rest Client support for SLM

This commit adds HLRC support for SLM.

Relates to #38461

* Fill out documentation tests with tags

* Add more callouts and asciidoc for HLRC

* Update javadoc links to real locations

* Add security test testing SLM cluster privileges (#42678)

* Add security test testing SLM cluster privileges

This adds a test to `PermissionsIT` that uses the `manage_slm` and `read_slm`
cluster privileges.

Relates to #38461

* Don't redefine vars

*  Add Getting Started Guide for SLM  (#42878)

This commit adds a basic Getting Started Guide for SLM.

* Include SLM policy name in Snapshot metadata (#43132)

Keep track of which SLM policy created a snapshot in the metadata field of the snapshots
taken by SLM. This allows users to more easily understand where the
snapshot came from, and will enable future SLM features such as
retention policies.

* Fix compilation after master merge

* [TEST] Move exception wrapping for devious exception throwing

Fixes an issue where an exception was created from one line and thrown in another.

* Fix SLM for the change to AcknowledgedResponse

* Add Snapshot Lifecycle Management Package Docs (#43535)

* Fix compilation for transport actions now that task is required

* Add a note mentioning the privileges needed for SLM (#43708)

* Add a note mentioning the privileges needed for SLM

This adds a note to the top of the "getting started with SLM"
documentation mentioning that there are two built-in privileges to
assist with creating roles for SLM users and administrators.

Relates to #38461

* Mention that you can create snapshots for indices you can't read

* Fix REST tests for new number of cluster privileges

* Mute testThatNonExistingTemplatesAreAddedImmediately (#43951)

* Fix SnapshotHistoryStoreTests after merge

* Remove overridden newResponse functions that have been removed

* Fix compilation for backport

* Fix get snapshot output parsing in test

* [DOCS] Add redirects for removed autogen anchors (#44380)

* Switch <tt>...</tt> in javadocs for {@code ...}
2019-07-16 07:37:13 -06:00
Hendrik Muhs 6c1f740759
[ML-DataFrame] make checkpointing more robust (#44344) (#44414)
make checkpointing more robust:

- do not let checkpointing fail if indexes got deleted
- treat missing seqNoStats as just created indices (checkpoint 0)
- loglevel: do not treat failed update checks as errors

fixes #43992
2019-07-16 13:43:13 +02:00
Przemysław Witek 3f3a3d3f2b
[7.x] Add DatafeedTimingStats.average_search_time_per_bucket_ms and TimingStats.total_bucket_processing_time_ms stats (#44125) (#44404) 2019-07-16 12:51:29 +02:00
Ryan Ernst c4cf98c538
Convert core security actions to use writeable ActionType (#44359) (#44390)
This commit converts all the StreamableResponseActionType security
classes in xpack core to ActionType, implementing Writeable for their
response classes.

relates #34389
2019-07-16 01:11:13 -07:00
Jason Tedor be98a12cd0
Do not swallow I/O exception getting authentication (#44398)
When getting authentication info from the thread context, it might be
that we encounter an I/O exception. Today we swallow this exception and
return a null authentication info to the caller. Yet, this could be
hiding bugs or errors. This commits adjusts this behavior so that we no
longer swallow the exception.
2019-07-16 16:14:15 +09:00
Ryan Ernst e0b82e92f3
Convert BaseNode(s) Request/Response classes to Writeable (#44301) (#44358)
This commit converts all BaseNodeResponse and BaseNodesResponse
subclasses to implement Writeable.Reader instead of Streamable.

relates #34389
2019-07-15 18:07:52 -07:00
Yannick Welsch a848fc9bf4 Revert "Add usage stats for frozen indices (#44286)"
This reverts commit 5e73c49ec8.
2019-07-15 21:41:25 +02:00
Yannick Welsch 7b68bfb4e6 Revert "Add frozen indices usage for all but transport client (#44286)"
This reverts commit d2d40afc02.
2019-07-15 21:41:21 +02:00
Yannick Welsch d2d40afc02 Add frozen indices usage for all but transport client (#44286)
Backport gone wrong.
2019-07-15 20:49:23 +02:00
Ryan Ernst 59658daef9
Separate streamable based master node actions (#44313)
This commit creates new base classes for master node actions whose
response types still implement Streamable. This simplifies both finding
remaining classes to convert, as well as creating new master node
actions that use Writeable for their responses.

relates #34389
2019-07-15 09:20:20 -07:00
Yannick Welsch 5e73c49ec8 Add usage stats for frozen indices (#44286)
Adds usage stats for frozen indices of the form:

  "frozen_indices" : {
    "available" : true,
    "enabled" : true,
    "indices_count" : 0
  }
2019-07-15 17:34:46 +02:00
Ryan Ernst fc6a31e141 Use specific version constant for wire bwc check (#44316)
This commit modifies bwc behavior in FindFileStructureAction to check
against a concrete version instead of Version.CURRENT. Checking against
Version.CURRENT does not work since it changes over time, in addition to
having different meanings on each branch.

relates #42501
2019-07-14 19:05:14 -07:00
Hendrik Muhs 684b562381
[7.x][ML-DataFrame] Rewrite continuous logic to prevent terms count limit (#44287)
Rewrites how continuous data frame transforms calculate and handle buckets that require an update. Instead of storing the whole set in memory, it pages through the updates using a 2nd cursor. This lowers memory consumption and prevents problems with limits at query time (max_terms_count). The list of updates can be re-retrieved in a failure case (#43662)
2019-07-13 06:58:04 +02:00
Ryan Ernst 1dcf53465c Reorder HandledTransportAction ctor args (#44291)
This commit moves the Supplier variant of HandledTransportAction to have
a different ordering than the Writeable.Reader variant. The Supplier
version is used for the legacy Streamable classes, and currently having the
Writeable.Reader and Supplier parameters in the same position forces callers
to cast to Writeable.Reader to select the correct super constructor.
This change in ordering allows easier migration to Writeable.Reader.

relates #34389
2019-07-12 13:45:09 -07:00
Benjamin Trent 79c62fd724
[ML][Data Frame] Fixing default delay set in timesync (#44281) (#44293)
* [ML][Data Frame] Fixing default delay set in timesync

* disallowing explicit null, don't do duration check on write
2019-07-12 15:21:47 -05:00
Benjamin Trent 68cd675892
[ML][Data Frame] responding with 409 status code when failing _stop (#44231) (#44276)
* [ML][Data Frame] responding with appropriate status code when failing _stop

* adding null checks for persistent task data

* addressing PR comments
2019-07-12 10:10:24 -05:00
Przemysław Witek dd5f4ae00e
Update .ml-config mappings before indexing job, datafeed or df analytics config (#44216) (#44273) 2019-07-12 16:49:48 +02:00
Benjamin Trent 40cc081ad3
[ML][Data Frame] adds index validations to _start data frame transform (#44191) (#44227)
* [ML][Data Frame] adds index validations to _start data frame transform

* addressing pr comments
2019-07-11 12:50:50 -05:00
Benjamin Trent c82d9c5b50
[ML] Adds support for regression.mean_squared_error to eval API (#44140) (#44218)
* [ML] Adds support for regression.mean_squared_error to eval API

* addressing PR comments

* fixing tests
2019-07-11 09:22:52 -05:00
Ryan Ernst fb77d8f461 Removed writeTo from TransportResponse and ActionResponse (#44092)
The base classes for transport requests and responses currently
implement Streamable and Writeable. The writeTo method on these base
classes is implemented with an empty implementation. Not only does this
mislead subclasses into thinking they need to call super.writeTo, but it
can also lead to not implementing writeTo when it should have been
implemented, or extending one of these classes when not necessary,
since there is nothing to actually implement.

This commit removes the empty writeTo from these base classes, and fixes
subclasses to not call super and in some cases implement an empty
writeTo themselves.

relates #34389
2019-07-10 12:42:04 -07:00
Michael Basnight d2c3f4bae9 Validate read priv of enrich source indices (#43595)
This commit adds permissions validation on the indices provided in the
enrich policy. These indices should be validated at store time so as not
to have cryptic error messages in the event the user does not have
permissions to access said indices.
2019-07-10 13:09:10 -05:00
Zachary Tong 92ad588275
Remove generic on AggregatorFactory (#43664) (#44079)
AggregatorFactory was generic over itself, but it doesn't appear we
use this functionality anywhere (e.g. to allow the super class
to declare arguments/return types generically for subclasses to
override).  Most places use a wildcard constraint, and even when a
concrete type was specified, it wasn't used.

But since AggFactories are widely used, this led to
the generic touching many pieces of code and making type signatures
fairly complex
2019-07-10 13:20:28 -04:00
Przemysław Witek 44781e415e
[7.x] [ML] Add DatafeedTimingStats to datafeed GetDatafeedStatsAction.Response (#43045) (#44118) 2019-07-10 11:51:44 +02:00
David Roberts cb62d4acdf [ML-DataFrame] Add a frequency option to transform config, default 1m (#44120)
Previously a data frame transform would check whether the
source index was changed every 10 seconds. Sometimes it
may be desirable for the check to be done less frequently.
This commit increases the default to 60 seconds but also
allows the frequency to be overridden by a setting in the
data frame transform config.
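
As an illustrative sketch only (the transform id, index, and aggregation names are placeholders, and treating `frequency` as a top-level config field is an assumption based on the description above), an override could look like:

```json
PUT _data_frame/transforms/example-transform
{
  "source": { "index": "example-source" },
  "dest": { "index": "example-dest" },
  "frequency": "5m",
  "pivot": {
    "group_by": { "customer": { "terms": { "field": "customer_id" } } },
    "aggregations": { "total_price": { "sum": { "field": "price" } } }
  }
}
```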
2019-07-10 09:59:00 +01:00
Armin Braun 9eac5ceb1b
Dry up inputstream to bytesreference (#43675) (#44094)
* Dry up Reading InputStream to BytesReference
* Dry up spots where we use the same pattern to get from an InputStream to a BytesReferences
2019-07-09 09:18:25 +02:00
Dimitris Athanasiou a1a62fded3
[7.x][ML] Stop df-analytics action request should filter tasks (#44016) (#44023)
As a `BaseTasksRequest`, `StopDataFrameAnalyticsAction.Request` should
implement a `match` method that makes sure only df-analytics tasks
are applied.
2019-07-05 23:10:45 +03:00
Jim Ferenczi cdf55cb5c5 Refactor index engines to manage readers instead of searchers (#43860)
This commit changes the way we manage refreshes in the index engines.
Instead of relying on a SearcherManager, this change uses a ReaderManager that
creates ElasticsearchDirectoryReader when needed. Searchers are now created on-demand
(when acquireSearcher is called) from the current ElasticsearchDirectoryReader.
It also slightly changes the Engine.Searcher to extend IndexSearcher in order
to simplify the usage in the consumer.
2019-07-04 22:49:43 +02:00
Martijn van Groningen 7ba6e1752a
required changes after merge 2019-07-04 13:17:22 +02:00
Martijn van Groningen 653f1436a0
Merge remote-tracking branch 'es/7.x' into enrich-7.x 2019-07-04 13:05:10 +02:00
Christoph Büscher 662f517f4e Add _reload_search_analyzers endpoint to HLRC (#43733)
This change adds the new endpoint that allows reloading of search analyzers to
the high-level java rest client.

Relates to #43313
2019-07-03 12:05:59 +02:00
Dimitris Athanasiou 96b0b27f18
[7.x][ML] Set df-analytics task state to failed when appropriate (#43880) (#43906)
This introduces a `failed` state to which the data frame analytics
persistent task is set to when something unexpected fails. It could
be the process crashing, the results processor hitting some error,
etc. The failure message is then captured and set on the task state.
From there, it becomes available via the _stats API as `failure_reason`.

The df-analytics stop API now has a `force` boolean parameter. This allows
the user to call it for a failed task in order to reset it to `stopped` after
we have ensured the failure has been communicated to the user.

This commit also adds the analytics version in the persistent task
params as this allows us to prevent tasks from running on unsuitable nodes in
the future.
2019-07-03 12:41:56 +03:00
Alexander Reelsen 9077c4402f Watcher: Allow to execute actions for each element in array (#41997)
This adds the ability to execute an action for each element that occurs
in an array, for example you could sent a dedicated slack action for
each search hit returned from a search.

There is also a limit for the number of actions executed, which is
hardcoded to 100 right now, to prevent having watches run forever.
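
A rough sketch of such a watch (the watch id, action name, `foreach` path, and mustache fields are illustrative assumptions, not taken from this commit):

```json
PUT _watcher/watch/log_errors
{
  "trigger": { "schedule": { "interval": "1h" } },
  "input": {
    "search": {
      "request": {
        "indices": [ "logs" ],
        "body": { "query": { "match": { "level": "error" } } }
      }
    }
  },
  "actions": {
    "log_each_hit": {
      "foreach": "ctx.payload.hits.hits",
      "logging": { "text": "error hit: {{ctx.payload._id}}" }
    }
  }
}
```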

The watch history logs each action result and the total number of actions
that were executed.

Relates #34546
2019-07-03 11:28:50 +02:00
Tim Vernum 2a8f30eb9a
Support builtin privileges in get privileges API (#43901)
Adds a new "/_security/privilege/_builtin" endpoint so that builtin
index and cluster privileges can be retrieved via the Rest API
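
For reference, a minimal call to the new endpoint:

```
GET /_security/privilege/_builtin
```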

Backport of: #42134
2019-07-03 19:08:28 +10:00
Tim Vernum 31b19bd022
Use separate BitSet cache in Doc Level Security (#43899)
Document level security was depending on the shared
"BitsetFilterCache" which (by design) never expires its entries.

However, when using DLS queries - particularly templated ones - the
number (and memory usage) of generated bitsets can be significant.

This change introduces a new cache specifically for BitSets used in
DLS queries, that has memory usage constraints and access time expiry.

The whole cache is automatically cleared if the role cache is cleared.
Individual bitsets are cleared when the corresponding lucene index
reader is closed.

The cache defaults to 50MB, and entries expire if unused for 7 days.

Backport of: #43669
2019-07-03 18:04:06 +10:00
Benjamin Trent fb825a6470
[7.x] [ML][Data Frame] add node attr to GET _stats (#43842) (#43894)
* [ML][Data Frame] add node attr to GET _stats (#43842)

* [ML][Data Frame] add node attr to GET _stats

* addressing testing issues with node.attributes

* adjusting for backport
2019-07-02 19:35:37 -05:00
Christoph Büscher 31cf96e7bf Return reloaded analyzers in _reload_search_analyzers response (#43813)
Currently the response of the "_reload_search_analyzers" endpoint contains the
index names and nodeIds of the indices where analyzer reloading was triggered. This
change adds the names of the search-time analyzers that were reloaded.

Closes #43804
2019-07-02 18:51:15 +02:00
Dimitris Athanasiou 1ea53979b5
[7.x][ML] Get df-analytics action should require monitor privilege (#43831) (#43866) 2019-07-02 16:00:54 +03:00
Tim Vernum 8d099dad38
Add "manage_api_key" cluster privilege (#43865)
This adds a new cluster privilege for manage_api_key. Users with this
privilege are able to create new API keys (as a child of their own
user identity) and may also get and invalidate any/all API keys
(including those owned by other users).
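
A minimal sketch of how the privilege might be granted (the role name is a placeholder):

```json
PUT /_security/role/api_key_admin
{
  "cluster": [ "manage_api_key" ]
}
```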

Backport of: #43728
2019-07-02 21:57:42 +10:00
Benjamin Trent b95ee7ebb2
[7.x] [ML][Data Frame] using transform creation version for node assignment (#43764) (#43843)
* [ML][Data Frame] using transform creation version for node assignment (#43764)

* [ML][Data Frame] using transform creation version for node assignment

* removing unused imports

* Addressing PR comment

* adjusing for backport
2019-07-02 06:52:34 -05:00
Benjamin Trent 82c1ddc117
[7.x] [ML][Data Frame] Add deduced mappings to _preview response payload (#43742) (#43849)
* [ML][Data Frame] Add deduced mappings to _preview response payload (#43742)

* [ML][Data Frame] Add deduced mappings to _preview response payload

* updating preview docs

* fixing code for backport
2019-07-02 06:52:14 -05:00
Tanguy Leroux b977f019b8
Expose translog stats in ReadOnlyEngine (#43752) (#43823)
Backport of #43752 for 7.x.
2019-07-02 13:39:00 +02:00
Yogesh Gaikwad 031d5e96ac
HLRC changes for kerberos grant type (#43642) (#43822)
The TODO from the last PR for the kerberos grant type was missed.
This commit adds the changes for kerberos grant type in HLRC.
2019-07-02 00:55:02 +10:00
Benjamin Trent 4c95c0c456
[ML][Data Frame] reduce audit frequency, change log msg, and level (#43771) (#43818) 2019-07-01 09:14:26 -05:00
Julie Tibshirani ffa5919d7c
Add support for 'flattened object' fields. (#43762)
This commit merges the `object-fields` feature branch. The new 'flattened
object' field type allows an entire JSON object to be indexed into a field, and
provides limited search functionality over the field's contents.
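
A minimal mapping sketch (index and field names are placeholders):

```json
PUT /example-index
{
  "mappings": {
    "properties": {
      "labels": { "type": "flattened" }
    }
  }
}
```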
2019-07-01 12:08:50 +03:00
Dimitris Athanasiou 3bdb9d5f08
[7.x][ML] Correct df-analytics version introduced to 7.3.0 (#43784) (#43795) 2019-07-01 09:19:04 +03:00
Ryan Ernst 3a2c698ce0
Rename Action to ActionType (#43778)
Action is a class that encapsulates meta information about an action
that allows it to be called remotely, specifically the action name and
response type. With recent refactoring, the action class can now be
constructed as a static constant, instead of needing to create a
subclass. This makes the old pattern of creating a singleton INSTANCE
both misnamed and lacking a common placement.

This commit renames Action to ActionType, thus allowing the old INSTANCE
naming pattern to be TYPE on the transport action itself. ActionType
also conveys that this class is also not the action itself, although
this change does not rename any concrete classes as those will be
removed organically as they are converted to TYPE constants.

relates #34389
2019-06-30 22:00:17 -07:00
Martijn van Groningen adcba69d96
required changes after merge 2019-06-30 21:40:40 +02:00
Martijn van Groningen eb8e03bc8b
Merge remote-tracking branch 'es/7.x' into enrich-7.x 2019-06-30 21:32:51 +02:00
Ryan Ernst 28ab77a023
Add StreamableResponseAction to aid in deprecation of Streamable (#43770)
The Action base class currently works for both Streamable and Writeable
response types. This commit introduces StreamableResponseAction, which
only the legacy Action implementations that provide newResponse()
will extend. This eliminates the need for overriding newResponse() with
an UnsupportedOperationException.

relates #34389
2019-06-28 21:40:00 -07:00
Benjamin Trent 67a3c656c3
[7.x] [ML][Data Frame] removing format support (#43659) (#43747)
* [ML][Data Frame] removing format support (#43659)

* Fixing conflicts
2019-06-28 10:02:37 -05:00
Jim Ferenczi 7ca69db83f Refactor IndexSearcherWrapper to disallow the wrapping of IndexSearcher (#43645)
This change removes the ability to wrap an IndexSearcher in plugins. The IndexSearcherWrapper is replaced by an IndexReaderWrapper and allows to wrap the DirectoryReader only. This simplifies the creation of the context IndexSearcher that is used on a per request basis. This change also moves the optimization that was implemented in the security index searcher wrapper to the ContextIndexSearcher that now checks the live docs to determine how the search should be executed. If the underlying live docs is a sparse bit set the searcher will compute the intersection
between the query and the live docs instead of checking the live docs on every document that matches the query.
2019-06-28 16:28:02 +02:00
Dimitris Athanasiou 86c853a7c2
[7.x][ML] Rename outlier score setting to feature_influence_threshold (#43705) (#43734)
Renames outlier score setting `minimum_score_to_write_feature_influence`
to `feature_influence_threshold`.
2019-06-28 13:28:25 +03:00
Dimitris Athanasiou cab879118d
[7.x][ML] Support multiple source indices for df-analytics (#43702) (#43731)
This commit adds support for multiple source indices.
In order to deal with multiple indices having different mappings,
it attempts a best-effort approach to merge the mappings assuming
there are no conflicts. In case conflicts exists an error will be
returned.

To allow users creating custom mappings for special use cases,
the destination index is now allowed to exist before the analytics
job runs. In addition, settings are no longer copied except for
the `index.number_of_shards` and `index.number_of_replicas`.
2019-06-28 13:28:03 +03:00
Christoph Büscher 2cc7f5a744
Allow reloading of search time analyzers (#43313)
Currently changing resources (like dictionaries, synonym files etc...) of search
time analyzers is only possible by closing an index, changing the underlying
resource (e.g. synonym files) and then re-opening the index for the change to
take effect.

This PR adds a new API endpoint that allows triggering reloading of certain
analysis resources (currently token filters) that will then pick up changes in
underlying file resources. To achieve this we introduce a new type of custom
analyzer (ReloadableCustomAnalyzer) that uses a ReuseStrategy that allows
swapping out analysis components. Custom analyzers that contain filters that are
marked as "updateable" will automatically choose this implementation. This PR
also adds this capability to `synonym` token filters for use in search time
analyzers.
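
For illustration, assuming the endpoint name used elsewhere in this log, the reload is triggered per index (the index name is a placeholder):

```
POST /example-index/_reload_search_analyzers
```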

Relates to #29051
2019-06-28 09:55:40 +02:00
Przemysław Witek 94f18da5df
Add version and create_time to data frame analytics config (#43683) (#43712) 2019-06-28 07:37:21 +02:00
Ryan Ernst 5b4089e57e
Remove nodeId from BaseNodeRequest (#43658)
TransportNodesAction provides a mechanism to easily broadcast a request
to many nodes, and collect the responses into a high level response. Each
node has its own request type, with a base class of BaseNodeRequest.
This base request requires passing the nodeId to which the request will
be sent. However, that nodeId is not used anywhere. It is private to the
base class, yet serialized to each node, where the node could just as
easily find the nodeId of the node it is on locally.

This commit removes passing the nodeId through to the node request
creation, and guards its serialization so that we can remove the base
request class altogether in the future.
2019-06-27 18:45:14 -07:00
Nhat Nguyen ce8771feb7 Do not use MockInternalEngine in GatewayIndexStateIT (#43716)
GatewayIndexStateIT#testRecoverBrokenIndexMetadata relies on the
flushing on shutdown. This behaviour, however, can be randomly disabled
in MockInternalEngine.

Closes #43034
2019-06-27 18:28:04 -04:00
Przemysław Witek 68dbbd8793
Deduplicate two similar TimeUtils classes. (#43697)
* Deduplicate org.elasticsearch.xpack.core.dataframe.utils.TimeUtils and org.elasticsearch.xpack.core.ml.utils.time.TimeUtils into a common class: org.elasticsearch.xpack.core.common.time.TimeUtils.
* Add unit tests for parseTimeField and parseTimeFieldToInstant methods
2019-06-27 18:51:48 +02:00
Jim Ferenczi 329d05f61e Fix UOE on search requests that match a sparse role query (#43668)
Search requests executed through the SecurityIndexSearcherWrapper throw
an UnsupportedOperationException if they match a sparse role query.
When low level cancellation is activated (which is the default since #42857),
the context index searcher creates a weight that doesn't handle #scorer.
This change fixes this bug and adds a test to ensure that we check this case.
2019-06-27 16:56:56 +02:00
Martijn van Groningen 683e116601
Merge remote-tracking branch 'es/7.x' into enrich-7.x 2019-06-27 08:35:37 +02:00
Benjamin Trent 52e26bbc42
[ML][Data Frame] improve pivot nested field validations (#43548) (#43636)
* [ML][Data Frame] improve pivot nested field validations

* addressing pr comments
2019-06-26 13:35:51 -05:00
Benjamin Trent c121b00c98
[7.x] [ML][Data Frame] Add support for allow_no_match for endpoints (#43490) (#43637)
* [ML][Data Frame] Add support for allow_no_match for endpoints (#43490)

* [ML][Data Frame] Add support for allow_no_match parameter in endpoints

Adds support for:
* Get Transforms
* Get Transforms stats
* stop transforms

* Update DataFrameTransformDocumentationIT.java
2019-06-26 10:09:56 -05:00
Yannick Welsch 2049f715b3 Add voting-only master node (#43410)
A voting-only master-eligible node is a node that can participate in master elections but will not act
as a master in the cluster. In particular, a voting-only node can help elect another master-eligible
node as master, and can serve as a tiebreaker in elections. High availability (HA) clusters require at
least three master-eligible nodes, so that if one of the three nodes is down, then the remaining two
can still elect a master amongst them-selves. This only requires one of the two remaining nodes to
have the capability to act as master, but both need to have voting powers. This means that one of
the three master-eligible nodes can be made as voting-only. If this voting-only node is a dedicated
master, a less powerful machine or a smaller heap-size can be chosen for this node. Alternatively, a
voting-only non-dedicated master node can play the role of the third master-eligible node, which
allows running an HA cluster with only two dedicated master nodes.

Closes #14340

Co-authored-by: David Turner <david.turner@elastic.co>
2019-06-26 08:07:56 +02:00
Yogesh Gaikwad 480453aa24
Make role descriptors optional when creating API keys (#43481) (#43614)
This commit changes the `role_descriptors` field from required
to optional when creating an API key. The default behavior in the .NET ES
client is to omit properties with `null` values, requiring additional
workarounds. The behavior for the API does not change.
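
A minimal sketch of a create-API-key request that now validates without `role_descriptors` (the key name is a placeholder):

```json
POST /_security/api_key
{
  "name": "example-key"
}
```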
Field names (`id`, `name`) in the invalidate api keys API documentation have been
corrected where they were wrong.

Closes #42053
2019-06-26 14:30:51 +10:00
Przemysław Witek 76a750a0a0
Remove unused mapStringsOrdered method (#42513) (#43585) 2019-06-25 20:43:38 +02:00
Dimitris Athanasiou 126c2fd2d5
[7.x][ML] Machine learning data frame analytics (#43544) (#43592)
This merges the initial work that adds a framework for performing
machine learning analytics on data frames. The feature is currently experimental
and requires a platinum license. Note that the original commits can be
found in the `feature-ml-data-frame-analytics` branch.

A new set of APIs is added which allows the creation of data frame analytics
jobs. Configuration allows specifying different types of analysis to be performed
on a data frame. At first there is support for outlier detection.

The APIs are:

- PUT _ml/data_frame/analysis/{id}
- GET _ml/data_frame/analysis/{id}
- GET _ml/data_frame/analysis/{id}/_stats
- POST _ml/data_frame/analysis/{id}/_start
- POST _ml/data_frame/analysis/{id}/_stop
- DELETE _ml/data_frame/analysis/{id}

When a data frame analytics job is started a persistent task is created and started.
The main steps of the task are:

1. reindex the source index into the dest index
2. analyze the data through the data_frame_analyzer c++ process
3. merge the results of the process back into the destination index

In addition, an evaluation API is added which packages commonly used metrics
that provide evaluation of various analysis:

- POST _ml/data_frame/_evaluate
2019-06-25 20:29:11 +03:00
Przemysław Witek c702cd7415
[7.x] Implement XContentParser.genericMap and XContentParser.genericMapOrdered methods (#42059) (#43575) 2019-06-25 16:04:54 +02:00
Przemysław Witek b15e40ffad
Extract TimingStats-related functionality into TimingStatsReporter (#43371) (#43557) 2019-06-25 15:48:39 +02:00
Martijn van Groningen 36f0e8a8bb
Added multi node enrich tests and fixed serialization issues. (#43386)
The test for now tests the enrich APIs in a multi node environment.
Picked EsIntegTestCase test over a real qa module in order to avoid
adding another module that starts a test cluster.
2019-06-25 14:03:10 +02:00
Martijn van Groningen d0634e444d
Fixed compile errors after merge. 2019-06-25 10:12:16 +02:00
Martijn van Groningen f587519f17
Merge remote-tracking branch 'es/7.x' into enrich-7.x 2019-06-25 10:09:51 +02:00
Gordon Brown fac7efba9a
[7.x] Account for node versions during allocation in ILM Shrink (#43300)
This commit ensures that ILM's Shrink action will take node versions into
account when choosing which node to allocate to when shrinking an
index. Prior to this change, ILM could pick a node with a lower version
than some shards are already allocated to, which causes the new
allocation to fail as shards can't be relocated onto a node with a lower
version than they are already on.

As part of this, when making the decision about which node to allocate
to prior to Shrink, all shards in the index are considered, rather than
choosing a random shard to consider.

Further, the unit tests for the logic that chooses a node to allocate
shards to pre-shrink have been improved to validate the behavior in more
realistic and varied initial conditions.
2019-06-24 10:02:49 -06:00
Michael Basnight 6945e5d5e6 Add role for enrich processor (#42677)
This commit adds the manage_enrich privilege, which grants access to all
of the enrich processor lifecycle actions. In addition this commit also
creates a role which grants access to the generated indices.

Relates #41939

Co-authored-by: Martijn van Groningen <martijn.v.groningen@gmail.com>
2019-06-24 10:47:01 -05:00
Martijn van Groningen 101cf384ba
Replace Streamable w/ Writable in AcknowledgedResponse and subclasses (backport 7.x) (#43525)
This commit replaces usages of Streamable with Writeable for the
AcknowledgedResponse and its subclasses, plus associated actions.

Note that where possible response fields were made final and default
constructors were removed.

This is a large PR, but the change is mostly mechanical.

Relates to #34389
Backport of #43414
2019-06-24 13:47:37 +02:00
Martijn van Groningen df9f06213d
Merge remote-tracking branch 'es/7.x' into enrich-7.x 2019-06-21 19:58:04 +02:00
Benjamin Trent f4b75d6d14
[7.x] [ML][Data Frame] Add version and create_time to transform config (#43384) (#43480)
* [ML][Data Frame] Add version and create_time to transform config (#43384)

* [ML][Data Frame] Add version and create_time to transform config

* s/transform_version/version s/Date/Instant

* fixing getter/setter for version

* adjusting for backport
2019-06-21 09:11:44 -05:00
Simon Willnauer 424ef4f158 SecurityIndexSearcherWrapper doesn't always carry over caches and similarity (#43436)
If DocumentLevelSecurity is enabled SecurityIndexSearcherWrapper doesn't carry
over the cache, cache policy and similarity from the incoming searcher.
2019-06-21 10:19:10 +02:00
Tim Vernum 059eb55108
Use SecureString for password length validation (#43465)
This replaces the use of char[] in the password length validation
code, with the use of SecureString

Although the use of char[] is not in itself problematic, using a
SecureString encourages callers to think about the lifetime of the
password object and to clear it after use.

Backport of: #42884
2019-06-21 17:11:07 +10:00
Yannick Welsch 7f8e1454ab Advance checkpoints only after persisting ops (#43205)
Local and global checkpoints currently do not correctly reflect what's persisted to disk. The issue is
that the local checkpoint is adapted as soon as an operation is processed (but not fsynced yet). This
leaves room for the history below the global checkpoint to still change in case of a crash. As we rely
on global checkpoints for CCR as well as operation-based recoveries, this has the risk of shard
copies / follower clusters going out of sync.

This commit required changing some core classes in the system:

- The LocalCheckpointTracker keeps track now not only of the information whether an operation has
been processed, but also whether that operation has been persisted to disk.
- TranslogWriter now keeps track of the sequence numbers that have not been fsynced yet. Once
they are fsynced, TranslogWriter notifies LocalCheckpointTracker of this.
- ReplicationTracker now keeps track of the persisted local and persisted global checkpoints of all
shard copies when in primary mode. The computed global checkpoint (which represents the
minimum of all persisted local checkpoints of all in-sync shard copies), which was previously stored
in the checkpoint entry for the local shard copy, has been moved to an extra field.
- The periodic global checkpoint sync now also takes async durability into account, where the local
checkpoints on shards only advance when the translog is asynchronously fsynced. This means that
the previous condition to detect inactivity (max sequence number is equal to global checkpoint) is
not sufficient anymore.
- The new index closing API does not work when combined with async durability. The shard
verification step now requires an additional pre-flight step to fsync the translog, so that the main
verify shard step has the most up-to-date global checkpoint at disposition.
2019-06-20 11:12:38 +02:00
Martijn van Groningen 9de4e878f7
Merge remote-tracking branch 'es/7.x' into enrich-7.x 2019-06-20 09:44:31 +02:00
Jason Tedor 1f1a035def
Remove stale test logging annotations (#43403)
This commit removes some very old test logging annotations that appeared
to be added to investigate test failures that are long since closed. If
these are needed, they can be added back on a case-by-case basis with a
comment associating them to a test failure.
2019-06-19 22:58:22 -04:00
Benjamin Trent 77ce3260dd
[ML][Data Frame] make response.count be total count of hits (#43241) (#43389)
* [ML][Data Frame] make response.count be total count of hits

* addressing line length check

* changing response count for filters

* adjusting serialization, variable name, and total count logic

* making count mandatory for creation
2019-06-19 16:19:06 -05:00
Benjamin Trent b333ced5a7
[7.x] [ML][Data Frame] adds new pipeline field to dest config (#43124) (#43388)
* [ML][Data Frame] adds new pipeline field to dest config (#43124)

* [ML][Data Frame] adds new pipeline field to dest config

* Adding pipeline support to _preview

* removing unused import

* moving towards extracting _source from pipeline simulation

* fixing permission requirement, adding _index entry to doc

* adjusting for java 8 compatibility

* adjusting bwc serialization version to 7.3.0
2019-06-19 16:18:27 -05:00
Christos Soulios d1637ca476
Backport: Refactor aggregation base classes to remove doEquals() and doHashCode() (#43363)
This PR is a backport a of #43214 from v8.0.0

A number of the aggregation base classes have an abstract doEquals() and doHashCode() (e.g. InternalAggregation.java, AbstractPipelineAggregationBuilder.java).

Theoretically this is so the sub-classes can add to the equals/hashCode and don't need to worry about calling super.equals(). In practice, it's mostly just confusing/inconsistent. And if there are more than two levels, we end up with situations like InternalMappedSignificantTerms which has to call super.doEquals() which defeats the point of having these overridable methods.

This PR removes the do versions and just use equals/hashCode ensuring the super when necessary.
2019-06-19 22:31:06 +03:00
Gordon Brown 23a3471394
Fix randomization in testPerformActionAttrsRequestFails (#43304)
The randomization in this test would occasionally generate duplicate
node attribute keys, causing spurious test failures. This commit adjusts
the randomization to not generate duplicate keys and cleans up the data
structure used to hold the generated keys.
2019-06-19 10:34:39 -06:00
Yogesh Gaikwad 2f173402ec
Add kerberos grant_type to get token in exchange for Kerberos ticket (#42847) (#43355)
Kibana wants to create access_token/refresh_token pair using Token
management APIs in exchange for kerberos tickets. `client_credentials`
grant_type requires every user to have `cluster:admin/xpack/security/token/create`
cluster privilege.

This commit introduces `_kerberos` grant_type for generating `access_token`
and `refresh_token` in exchange for a valid base64 encoded kerberos ticket.
In addition, `kibana_user` role now has cluster privilege to create tokens.
This allows Kibana to create access_token/refresh_token pair in exchange for
kerberos tickets.
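
A hedged sketch of the token request; the `kerberos_ticket` field name is an assumption, as the commit only specifies the `_kerberos` grant type and a base64-encoded ticket:

```json
POST /_security/oauth2/token
{
  "grant_type": "_kerberos",
  "kerberos_ticket": "<base64-encoded kerberos ticket>"
}
```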

Note:
The lifetime of the kerberos ticket is not used in ES, so even after it expires
the access_token/refresh_token pair will be valid. Care must be taken to invalidate
such tokens using token management APIs if required.

Closes #41943
2019-06-19 18:26:52 +10:00
Mayya Sharipova aa6248d4d7
Move dense_vector and sparse_vector to module (#43280) (#43333) 2019-06-18 11:56:04 -04:00
Lee Hinman 21da84edbc Make ILM force merging best effort (#43246)
It's possible for force merges kicked off by ILM to silently stop (due
to a node relocating, for example). In that case, the segment count may
not reach what the user configured. In the subsequent `SegmentCountStep`
waiting for the expected segment count may wait indefinitely. Because of
this, this commit makes force merges "best effort" and then changes the
`SegmentCountStep` to simply report (at INFO level) if the merge was not
successful.

Relates to #42824
Resolves #43245
2019-06-17 08:45:22 -06:00
Przemysław Witek b2613a123d
[7.x] Report exponential_avg_bucket_processing_time which gives more weight to recent buckets (#43189) (#43263) 2019-06-17 08:58:26 +02:00
Przemysław Witek 65a584b6fb
[7.x] Report timing stats as part of the Job stats response (#42709) (#43193) 2019-06-14 09:03:14 +02:00
Jason Tedor 5bc3b7f741
Enable node roles to be pluggable (#43175)
This commit introduces the possibility for a plugin to introduce
additional node roles.
2019-06-13 15:15:48 -04:00
Martijn van Groningen 1f3db7eb3e
Merge remote-tracking branch 'es/7.x' into enrich-7.x 2019-06-13 16:49:38 +02:00
Simon Willnauer f70141c862 Only load FST off heap if we are actually using mmaps for the term dictionary (#43158)
Given the significant performance impact that NIOFS has when term dicts are
loaded off-heap, this change enforces FstLoadMode#AUTO, which loads term dicts
off heap only if the underlying index input indicates a memory map.

Relates to #43150
2019-06-13 07:54:02 +02:00
David Kyle 597ae5c7b8 [ML DataFrame] Reject Data Frame Ids containing upper case characters (#43145) 2019-06-12 18:13:18 +01:00
Benjamin Trent 7ff3d86cf0
[ML][Data Frame] adding dest.index and id validations (#43053) (#43109)
* [ML][Data Frame] adding dest.index and id validations

* adjusting message format

* Adjusting id validity pattern

* Update DataFrameStrings.java
2019-06-11 15:55:18 -05:00
Ryan Ernst 172cd4dbfa Remove description from xpack feature sets (#43065)
The description field of xpack featuresets is optionally part of the
xpack info api, when using the verbose flag. However, this information
is unnecessary, as it is better left for documentation (and the existing
descriptions describe anything meaningful). This commit removes the
description field from feature sets.
2019-06-11 09:22:58 -07:00
Dimitris Athanasiou 76a92b49a8
[ML] Get resources action should be lenient when sort field is unmapped (#42991) (#43046)
Get resources action sorts on the resource id. When there are no resources at
all, then it is possible the index does not contain a mapping for the resource
id field. In that case, the search api fails by default.

This commit adjusts the search request to ignore unmapped fields.
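
One way a sort can tolerate an unmapped field is the `unmapped_type` sort option; whether the fix uses exactly this mechanism is not spelled out here, so treat the following as an illustrative sketch (index and field names are placeholders):

```json
GET /example-config-index/_search
{
  "sort": [
    { "resource_id": { "order": "asc", "unmapped_type": "keyword" } }
  ]
}
```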

Closes elastic/kibana#37870
2019-06-10 19:50:19 +03:00
Alan Woodward 8e23e4518a Move construction of custom analyzers into AnalysisRegistry (#42940)
Both TransportAnalyzeAction and CategorizationAnalyzer have logic to build
custom analyzers for index-independent analysis. A lot of this code is duplicated,
and it requires the AnalysisRegistry to expose a number of internal provider
classes, as well as making some assumptions about when analysis components are
constructed.

This commit moves the build logic directly into AnalysisRegistry, reducing the
registry's API surface considerably.
2019-06-10 14:33:25 +01:00
Jason Tedor 63bad28005
Do not allow modify aliases on followers (#43017)
Now that aliases are replicated by a follower from its leader, this
commit prevents directly modifying aliases on follower indices.
2019-06-09 22:53:54 -04:00
Jason Tedor 915d2f2daa
Refactor put mapping request validation for reuse (#43005)
This commit refactors put mapping request validation for reuse. The
concrete case that we are after here is the ability to apply effectively
the same framework to indices aliases requests. This commit refactors
the put mapping request validation framework to allow for that.
2019-06-09 10:19:04 -04:00
Tim Vernum 090d42d3e6
Permit API Keys on Basic License (#42973)
Kibana alerting is going to be built using API Keys, and should be
permitted on a basic license.

This commit moves API Keys (but not Tokens) to the Basic license

Relates: elastic/kibana#36836
Backport of: #42787
2019-06-07 14:18:05 +10:00
henryptung 61b62125b8 Wire query cache into sorting nested-filter computation (#42906)
Don't use Lucene's default query cache when filtering in sort.
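
For context, this is the kind of request affected, a sort that applies a `nested` filter (index and field names are placeholders):

```json
GET /products/_search
{
  "sort": [
    {
      "offers.price": {
        "order": "asc",
        "mode": "min",
        "nested": {
          "path": "offers",
          "filter": { "term": { "offers.color": "blue" } }
        }
      }
    }
  ]
}
```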

Closes #42813
2019-06-06 21:16:58 +02:00
Ioannis Kakavas ba5b1d3249 Mute failing testPerformActionAttrsRequestFails (#42933) 2019-06-06 14:28:50 +03:00
David Roberts b202a59f88 [ML] Add earliest and latest timestamps to field stats (#42890)
This change adds the earliest and latest timestamps into
the field stats for fields of type "date" in the output of
the ML find_file_structure endpoint.  This will enable the
cards for date fields in the file data visualizer in the UI
to be made to look more similar to the cards for date
fields in the index data visualizer in the UI.
2019-06-06 08:58:35 +01:00
Przemyslaw Gomulka cfdb1b771e
Enable console audit logs for docker backport#42671 #42887
Enable audit logs in docker by creating console appenders for audit loggers.
Also rename the field @timestamp to timestamp and add a field type with the value audit.

The docker build now contains two log4j configurations, for the oss and the default versions. The build now allows overriding the default configuration.

Also changed the format of a timestamp from ISO8601 to include time zone as per this discussion #36833 (comment)

closes #42666
backport#42671
2019-06-05 17:15:37 +02:00
Benjamin Trent 293f306b9a
[ML][Data Frame] forcing that no ptask => STOPPED state (#42800) (#42860)
* [ML][Data Frame] forcing that no ptask => STOPPED state

* Addressing side-effect, early exit for stop when stopped
2019-06-05 07:09:34 -05:00
Simon Willnauer ebec118ccf Bring back ExecutionException after backport 2019-06-05 12:10:02 +02:00
Simon Willnauer 41a9f3ae3b Use reader attributes to control term dict memory useage (#42838)
This change makes use of the reader attributes added in LUCENE-8671
to ensure that `_id` fields are always on-heap for best update performance
and term dicts are generally off-heap on Read-Only engines.

Closes #38390
2019-06-05 11:01:06 +02:00
Jason Tedor 117df87b2b
Replicate aliases in cross-cluster replication (#42875)
This commit adds functionality so that aliases that are manipulated on
leader indices are replicated by the shard follow tasks to the follower
indices. Note that we ignore write indices. This is due to the fact that
follower indices do not receive direct writes so the concept is not
useful.

Relates #41815
2019-06-04 20:36:24 -04:00
Mark Vieira e44b8b1e2e
[Backport] Remove dependency substitutions 7.x (#42866)
* Remove unnecessary usage of Gradle dependency substitution rules (#42773)

(cherry picked from commit 12d583dbf6f7d44f00aa365e34fc7e937c3c61f7)
2019-06-04 13:50:23 -07:00
David Roberts b61202b0a8 [ML] Add a limit on line merging in find_file_structure (#42501)
When analysing a semi-structured text file the
find_file_structure endpoint merges lines to form
multi-line messages using the assumption that the
first line in each message contains the timestamp.
However, if the timestamp is misdetected then this
can lead to excessive numbers of lines being merged
to form massive messages.

This commit adds a line_merge_size_limit setting
(default 10000 characters) that halts the analysis
if a message bigger than this is created.  This
prevents significant CPU time being spent subsequently
trying to determine the internal structure of the
huge bogus messages.
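
Treating the new setting as a query parameter on the endpoint is an assumption here; a sketch of a call raising the limit would look like the following, with the raw sample text sent as the request body (omitted):

```
POST _ml/find_file_structure?line_merge_size_limit=20000
```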
2019-06-03 13:45:51 +01:00
Benjamin Trent 0253927ec4
[ML Data Frame] Refactor stop logic (#42644) (#42763)
* Revert "invalid test"

This reverts commit 9dd8b52c13c716918ff97e6527aaf43aefc4695d.

* Testing

* mend

* Revert "[ML Data Frame] Mute Data Frame tests"

This reverts commit 5d837fa312b0e41a77a65462667a2d92d1114567.

* Call onStop and onAbort outside atomic update

* Don’t update CS

* Tidying up

* Remove invalid test that asserted logic that has been removed

* Add stopped event

* Revert "Add stopped event"

This reverts commit 02ba992f4818bebd838e1c7678bd2e1cc090bfab.

* Adding check for STOPPED in saveState
2019-06-03 06:53:44 -05:00
David Roberts 87ca762573 [ML] Add Kibana application privilege to data frame admin/user roles (#42757)
Data frame transforms are restricted by different roles to ML, but
share the ML UI.  To prevent the ML UI being hidden for users who
only have the data frame admin or user role, it is necessary to add
the ML Kibana application privilege to the backend data frame roles.
2019-05-31 15:41:59 +01:00
Przemysław Witek f6779de2b7
Increase maximum forecast interval to 10 years. (#41082) (#42710)
Increase the maximum duration to ~10 years (3650 days).
2019-05-31 06:19:47 +02:00
James Baiera 215170b6c3 Merge branch '7.x' into enrich-7.x 2019-05-30 16:13:06 -04:00
Jason Tedor 371cb9a8ce
Remove Log4j 1.2 API as a dependency (#42702)
We had this as a dependency for legacy dependencies that still needed
the Log4j 1.2 API. This appears to no longer be necessary, so this
commit removes this artifact as a dependency.

To remove this dependency, we had to fix a few places where we were
accidentally relying on Log4j 1.2 instead of Log4j 2 (easy to do, since
both APIs were on the compile-time classpath).

Finally, we can remove our custom Netty logger factory. This was needed
when we were on Log4j 1.2 and handled logging in our own unique
way. When we migrated to Log4j 2 we could have dropped this
dependency. However, even then Netty would still pick up Log4j 1.2 since
it was on the classpath, thus the advantage to removing this as a
dependency now.
2019-05-30 16:08:07 -04:00
Jay Modi 711de2f59a
Make hashed token ids url safe (#42651)
This commit changes the way token ids are hashed so that the output is
url safe without requiring encoding. This follows the pattern that we
use for document ids that are autogenerated, see UUIDs and the
associated classes for additional details.
2019-05-30 10:44:41 -06:00
Hendrik Muhs 345ff21ae5 [ML-DataFrame] rewrite start and stop to answer with acknowledged (#42589)
rewrite start and stop to answer with acknowledged

fixes #42450
2019-05-29 11:14:32 +02:00
Michael Basnight 77eed9e6a0 Add enrich policy GET API (#41384)
This commit wires up the Rest calls and Transport calls for GET enrich
policy, as well as tests and rest spec additions.
2019-05-28 23:19:23 -05:00
Michael Basnight be60125a4e Merge branch '7.x' into enrich-7.x 2019-05-28 18:32:18 -05:00
Hendrik Muhs ace96a2b6e check position before and after latch (#42623)
check position before and after latch

#fixes 42084
2019-05-28 21:23:29 +02:00
Daniel Mitterdorfer 635ce0ca6d
Mute AsyncTwoPhaseIndexerTests#testStateMachine() (#42610)
Relates #42084
Relates #42609
2019-05-28 10:20:22 +02:00
Armin Braun a5ca20a250
Some Cleanup in o.e.i.engine (#42278) (#42566)
* Some Cleanup in o.e.i.engine

* Remove dead code and parameters
* Reduce visibility in some obvious spots
* Add missing `assert`s (not that important here since the methods
themselves will probably be dead-code eliminated) but still
2019-05-27 11:04:54 +02:00
Martijn van Groningen a91cec4c46
Merge remote-tracking branch 'es/7.x' into enrich-7.x 2019-05-27 10:18:02 +02:00
Tanguy Leroux 6bec876682 Improve Close Index Response (#39687)
This changes the `CloseIndexResponse` so that it reports the closing result
for each index. Shard failures or exceptions are also reported per index,
and the global acknowledgment flag is computed from the index results
only.

The response looks like:
```
{
  "acknowledged" : true,
  "shards_acknowledged" : true,
  "indices" : {
    "docs" : {
      "closed" : true
    }
  }
}
```

The response reports shard failures like:
```
{
  "acknowledged" : false,
  "shards_acknowledged" : false,
  "indices" : {
    "docs-1" : {
      "closed" : true
    },
    "docs-2" : {
      "closed" : false,
      "shards" : {
        "1" : {
          "failures" : [
            {
              "shard" : 1,
              "index" : "docs-2",
              "status" : "BAD_REQUEST",
              "reason" : {
                "type" : "index_closed_exception",
                "reason" : "closed",
                "index_uuid" : "JFmQwr_aSPiZbkAH_KEF7A",
                "index" : "docs-2"
              }
            }
          ]
        }
      }
    },
    "docs-3" : {
      "closed" : true
    }
  }
}
```

Co-authored-by: Tanguy Leroux <tlrx.dev@gmail.com>
2019-05-24 21:57:55 -04:00
Hendrik Muhs 6d47ee9268 [ML-DataFrame] add support for fixed_interval, calendar_interval, remove interval (#42427)
* add support for fixed_interval, calendar_interval, remove interval

* adapt HLRC

* checkstyle

* add a hlrc to server test

* adapt yml test

* improve naming and doc

* improve interface and add test code for hlrc to server

* address review comments

* repair merge conflict

* fix date patterns

* address review comments

* remove assert for warning

* improve exception message

* use constants
2019-05-24 20:30:17 +02:00
Michael Basnight 2325ffb757 Add enrich policy execute API (#41762)
This commit wires up the Rest calls and Transport calls for execute
enrich policy, as well as tests and rest spec additions.
2019-05-24 09:39:29 -05:00
Martijn van Groningen 79fa7d8098
Merge remote-tracking branch 'es/7.x' into enrich-7.x 2019-05-24 11:44:35 +02:00
Tim Vernum 567c0d331f
Fix settings prefix for realm truststore password (#42413)
As part of #30241 realm settings were changed to be true affix
settings. In the process of this change, the "ssl." prefix was lost
from the realm truststore password. It should be:

    xpack.security.authc.realms.<type>.<name>.ssl.truststore.password

Due to a mismatch between the way we define SSL settings and  load SSL
contexts, there was no way to define this legacy password setting in a
realm config.

The settings validation would reject "ssl.truststore.password" but the
SSL service would ignore "truststore.password"

Backport of: #42336
2019-05-24 13:16:26 +10:00
Mengwei Ding fa98cbe320
Add .code_internal-* index pattern to kibana user (#42247) (#42387) 2019-05-22 20:25:45 -07:00
Luca Cavanna c2af62455f Cut over SearchResponse and SearchTemplateResponse to Writeable (#41855)
Relates to #34389
2019-05-22 18:47:54 +02:00
Martijn van Groningen 9e514cb161
Remove schedule field from EnrichPolicy (#42143) 2019-05-22 17:13:54 +02:00