This avoids log spam about being unable to create new mappings in indices
that are set to only allow one type. (It doesn't actually have any effect
on the deletion, which was working before despite the failure to create new
mappings for the legacy types referenced by the delete request.)
relates elastic/x-pack-elasticsearch#1634
Original commit: elastic/x-pack-elasticsearch@061ce7acf1
Because:
1. It's pointless, as new detector_index values are assigned when an
analysis_config is parsed
2. It creates a backwards compatibility issue when upgrading from v5.4
Original commit: elastic/x-pack-elasticsearch@2f61aa457e
This is the x-pack side of the removal of `accumulateExceptions()` for both `TransportNodesAction` and `TransportTasksAction`.
Occasional, random failures during API calls are silently ignored from the caller's perspective, which also leads to strange API responses that contain no result yet report no errors, which is obviously misleading.
Original commit: elastic/x-pack-elasticsearch@9b57321549
Detectors now have a field called detector_index. This is also now the
field that needs to be supplied when updating a detector. (Previously
it was simply index, which was confusing.)
When detectors are added to an analysis_config it will reassign
ascending detector_index values starting from 0. The intention is
never to allow deletion of detectors from an analysis_config, but
possibly to allow disabling them in the future. This ensures that
detector_index values in results will always tie up with detector_ids
in the detectors that created them.
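For illustration, a detector update keyed by detector_index might look roughly like the Python sketch below; the job name, description text, credentials and the exact shape of the update body are assumptions, not taken from this change:

    import json
    import requests

    # Hedged sketch: hypothetical job, description and credentials. The field
    # of interest is detector_index, which replaces the old "index" field.
    update_body = {
        "detectors": [
            {"detector_index": 0, "description": "sum of bytes by client IP"}
        ]
    }
    resp = requests.post(
        "http://localhost:9200/_xpack/ml/anomaly_detectors/my-job/_update",
        headers={"Content-Type": "application/json"},
        data=json.dumps(update_body),
        auth=("elastic", "changeme"),
    )
    print(resp.json())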
relates elastic/x-pack-elasticsearch#1275
Original commit: elastic/x-pack-elasticsearch@20a660b07b
At the end of the test, LocalExporterTests checks that no more monitoring
data are exported by repeatedly checking the last time node_stats
documents were exported, stopping after 10 seconds. It does this in an
@After annotated method, but it would be better to do it in a finally
block. Also, it should search for node_stats documents only if the
monitoring indices exist and are searchable, to avoid some "all shards
failed" failures.
Original commit: elastic/x-pack-elasticsearch@90ffb4affd
Now that we've set the option for one type per index, a stack trace is
logged if we issue a request to delete two documents with different
types. We only do this to cover the case of documents left over from
v5.4. We can avoid it by deleting by query using just the document IDs.
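A rough Python sketch of that approach (index name, document IDs and credentials are placeholders); the ids query names no types, so no mapping for a legacy type has to exist:

    import json
    import requests

    # Hedged sketch: delete by query on _id alone, never naming the legacy types.
    query = {"query": {"ids": {"values": ["doc-id-from-5.4", "doc-id-current"]}}}
    resp = requests.post(
        "http://localhost:9200/.ml-anomalies-shared/_delete_by_query",
        headers={"Content-Type": "application/json"},
        data=json.dumps(query),
        auth=("elastic", "changeme"),
    )
    print(resp.json())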
Original commit: elastic/x-pack-elasticsearch@2abffc7d95
This commit removes the ClientProxy and WatcherClientProxy classes. They
were added at a time when there were issues with Guice and circular
dependencies. However, there is no Guice anymore, and on top of that
the classes do not add any value.
We can switch to using a regular client, but we have to make sure that
the InternalClient is injected into all the transport actions, as it
is able to query data when security is enabled.
Original commit: elastic/x-pack-elasticsearch@763a79b2f7
The goal of this change is to allow datafeeds to start
when the job is in the opening state. This makes the API
more async and allows clients like the ML UI to open a
job and start its datafeed without having to manage the
complexity of timeouts caused by the job taking a long
time to open while restoring a large state.
In order to achieve this, this commit does a number of things:
- accepts a start datafeed request when the job is opening
- adds logic to the DatafeedManager to wait before running the
datafeed task until the job is opened
- refactors the datafeed node selection logic into its own class
- splits selection issues into critical and non-critical with regard
to creating the datafeed task
- refactors the unit tests to make them simpler to write & understand
- adds unit tests for added and modified functionality
- changes the response when the datafeed cannot be started to
be a conflict exception
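In practice a client can now fire the open and start calls back to back and let the server do the waiting. A rough Python sketch, with hypothetical job/datafeed names and credentials:

    import requests

    ES = "http://localhost:9200"
    AUTH = ("elastic", "changeme")

    # Open the job; restoring a large model state can make this take a while.
    requests.post(ES + "/_xpack/ml/anomaly_detectors/my-job/_open", auth=AUTH)

    # Start the datafeed straight away; the start request is now accepted while
    # the job is still opening, and a 409 signals a genuine conflict.
    resp = requests.post(ES + "/_xpack/ml/datafeeds/datafeed-my-job/_start", auth=AUTH)
    if resp.status_code == 409:
        print("datafeed could not be started:", resp.json())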
relates elastic/x-pack-elasticsearch#1535
Original commit: elastic/x-pack-elasticsearch@c83196155d
In preparation for the removal of types, new security types like invalidated-tokens are stored in the .security
index under the generic "doc" type, with a query filter on `doc_type`.
In order to avoid id clashes, we also need to use that doc_type as part of the document id.
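A minimal Python sketch of the idea; the field name, id format and token value below are hypothetical and only illustrate the prefixing scheme:

    # All security documents share the single "doc" mapping type, so the
    # logical type lives in a doc_type field and as an _id prefix.
    doc_type = "invalidated-token"
    token_id = "3b8f1c2d"                       # hypothetical token identifier

    doc_id = doc_type + "_" + token_id          # prefix avoids _id clashes

    search_body = {
        "query": {"bool": {"filter": [{"term": {"doc_type": doc_type}}]}}
    }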
relates elastic/x-pack-elasticsearch#1300
Original commit: elastic/x-pack-elasticsearch@469724a228
When an error response contains multiple layers of errors, Kibana displays
the one labelled root_cause. The definition of root_cause is the most
deeply nested ElasticsearchException. Therefore, it is of great benefit to
the UI if our config validation returns the actual problem in an
ElasticsearchException rather than an IllegalArgumentException.
This commit also adds an extra validation check to catch, earlier, the
case of a single job config containing both a field x.y and a field x.
Previously this was caught when we tried to create the results mappings,
and was accompanied by an error suggesting that using a dedicated results
index would help, when clearly it won't for a clash within a single job
config.
Fixes elastic/x-pack-kibana#1387
Fixes elastic/prelert-legacy#349
Original commit: elastic/x-pack-elasticsearch@7d1b7def6c
Otherwise it's possible that the get_filter endpoint can return a filter that's been
deleted. Although this is the behaviour of the search API, specific metadata
management APIs should provide better guarantees.
Original commit: elastic/x-pack-elasticsearch@818495f176
This allows us to build both 5.5.0-SNAPSHOT and 5.4.1-SNAPSHOT
artifacts for backwards compatibility testing. It is a port of
elastic/elasticsearch:24870 to x-pack and will be super useful
when elastic/elasticsearch:24846 is ported to x-pack.
Original commit: elastic/x-pack-elasticsearch@0ea443f488
Previously there were two @After methods in the XPackRestIT class, and
there is no guarantee about the order in which these run. This commit
replaces these with a single @After method that calls the cleanup methods
in a well-defined order.
Original commit: elastic/x-pack-elasticsearch@d3ab366591
The has_privileges API now supports wildcards.
The semantics are that the user must have a superset of the wildcard being checked.
Role | Check | Result
-----|-------|-------
*    | foo*  | true
f*   | foo*  | true
foo* | foo*  | true
foo* | foo?  | true
foo? | foo?  | true
foo? | foo*  | false
foo  | foo*  | false
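For example, the last row could be reproduced with a request along these lines (hypothetical user and credentials; assumes the standard _has_privileges endpoint); a role granting only foo reports false for foo*:

    import json
    import requests

    # Ask whether the caller holds "read" on every index matching "foo*".
    body = {"index": [{"names": ["foo*"], "privileges": ["read"]}]}
    resp = requests.post(
        "http://localhost:9200/_xpack/security/user/_has_privileges",
        headers={"Content-Type": "application/json"},
        data=json.dumps(body),
        auth=("some_user", "some_password"),
    )
    print(resp.json())  # a role granting only "foo" yields "read": false here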
Original commit: elastic/x-pack-elasticsearch@817550db17
We don't hyphenate metadata anywhere else.
Also added tests for the LdapMetaDataResolver as they were completely absent.
Original commit: elastic/x-pack-elasticsearch@eec647ba93
This deletes tests that get results from MlJobIT, since such tests
already exist in the YAML suite in a form that is simpler to
understand and maintain.
Original commit: elastic/x-pack-elasticsearch@b708e24877
This commit means that newly created ML state indices will have a single
type named "doc", and newly persisted state documents will have type
"doc" too.
Retrieving state is only supported for type "doc".
When deleting state, documents with the old types are deleted in addition
to those with type "doc". This means jobs created by the beta can be fully
deleted.
Relates elastic/x-pack-elasticsearch#668
Original commit: elastic/x-pack-elasticsearch@29c07d40f1
This commit adds an internal project called ml-cpp-snapshot which, when
built, will pull the ML C++ zip file from the prelert bucket. The GET
request has retries added to handle the eventual consistency of the
dynamic AWS creds.
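The retry behaviour amounts to something like this Python sketch; the URL, attempt count and backoff are placeholders, and the real implementation lives in the Gradle build:

    import time
    import requests

    SNAPSHOT_URL = "https://example-prelert-bucket/ml-cpp-snapshot.zip"  # placeholder

    def fetch_with_retries(url, attempts=5, backoff_seconds=10):
        """GET a URL, retrying to ride out eventually consistent AWS creds."""
        for attempt in range(1, attempts + 1):
            try:
                resp = requests.get(url, timeout=60)
                if resp.status_code == 200:
                    return resp.content
            except requests.RequestException:
                pass
            time.sleep(backoff_seconds)
        raise RuntimeError("could not download " + url)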
Original commit: elastic/x-pack-elasticsearch@1bba7d0f08
This commit cleans up the check for SSL with client authentication when a PKI realm is enabled by
moving it from the realm to an actual bootstrap check.
During this cleanup a bug was found in the check for transport profiles, and that is also fixed in
this commit.
relates elastic/x-pack-elasticsearch#420
Original commit: elastic/x-pack-elasticsearch@3aa6a3edc0
The commit that converted the results index into single type
broke the search for fetching results for renormalization.
This commit fixes that.
Original commit: elastic/x-pack-elasticsearch@1ca7517adc
REST endpoints that support GET and POST need
to also support source parsing. As these
endpoints can accept a body but some clients
do not allow doing a GET with a request body,
Elasticsearch has support for parsing via a
source URI parameter. This commit adds source
handling to all such endpoints.
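For instance, a client restricted to bodiless GETs could pass the body through the source parameter along these lines (hypothetical ML buckets request and credentials; source_content_type tells the parser how to read it):

    import json
    import requests

    body = {"page": {"from": 0, "size": 10}}  # hypothetical request body

    resp = requests.get(
        "http://localhost:9200/_xpack/ml/anomaly_detectors/my-job/results/buckets",
        params={
            "source": json.dumps(body),
            "source_content_type": "application/json",
        },
        auth=("elastic", "changeme"),
    )
    print(resp.json())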
relates elastic/x-pack-elasticsearch#1204
Original commit: elastic/x-pack-elasticsearch@3949ea31fe
Previously we used to normalize records with their buckets. This required
nested scrolls: an outer scroll over buckets, then a nested scroll for
records in each bucket. This was fragile.
The new approach is to simply scroll first through buckets, then through
records. This is made possible because we no longer store max_record_score
on buckets nor bucket anomaly_score on records.
While making these changes I noticed that the PerPartitionMaxProbabilities
class was redundant (because it was storing max_record_score in the case of
per-partition normalization), so I removed it. I also removed a redundant
Map from the Bucket class and fixed its equals() and hashCode() methods.
relates elastic/x-pack-elasticsearch#1115
Original commit: elastic/x-pack-elasticsearch@efbee63573
Deleting a job issues 2 cluster state updates.
The first marks the job as deleted.
The second actually removes the job.
Both check that there is no datafeed referring to the job.
If a put datafeed request arrives between those 2 cluster
state updates, the datafeed gets created and the final
job cluster state update fails. This means we end up with
both the job and the datafeed, but the job's results and
state have been deleted.
This commit changes the behaviour so that the put
datafeed request fails for a job that is marked as deleted,
thus avoiding this partially executed scenario.
relates elastic/x-pack-elasticsearch#1510
Original commit: elastic/x-pack-elasticsearch@76fa0f0b1a