This commit adds separate tasks for tribe clusters, off of which the
cluster formation tasks build their own tasks. This ensures each
cluster has its own wait task, so that the tribe node can wait for
the other clusters to be up before even trying to start.
relates elastic/x-pack-elasticsearch#877
Original commit: elastic/x-pack-elasticsearch@1e4c729372
Detector configs are validated both by our C++ and by our Java code.
If the C++ is stricter than the Java then error reporting is poor.
This commit adds two extra validation checks to the Java code that
were already present in the C++ validation.
relates elastic/x-pack-elasticsearch#856
Original commit: elastic/x-pack-elasticsearch@bd4ce2377c
It's possible for a C++ process to exit between the time when a
config update message for it is queued and the time that message
is processed. This commit ensures we don't spam the log with a
stack trace in this situation, as it's not a problem at all.
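The gist is a pattern like the following (a minimal sketch; `writeUpdateProcessMessage`, `processHasExited()` and the surrounding names are assumptions for illustration, not the actual code):

```java
import org.apache.logging.log4j.Logger;

// Sketch only: if the process has already exited by the time the queued update is
// handled, drop it quietly instead of logging a stack trace.
void writeConfigUpdate(AutodetectCommunicator communicator, String update,
                       String jobId, Logger logger) {
    try {
        communicator.writeUpdateProcessMessage(update);
    } catch (Exception e) {
        if (communicator.processHasExited()) {
            // Not a problem at all: the queued update simply arrived too late.
            logger.debug("[{}] Config update ignored; autodetect process already exited", jobId);
        } else {
            logger.error("[" + jobId + "] Failed to write config update", e);
        }
    }
}
```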
relates elastic/x-pack-elasticsearch#891
Original commit: elastic/x-pack-elasticsearch@81af8eaf70
Aggregated data extraction is done in 2 phases:
1. search
2. process response
The first phase currently cannot be cancelled. However, it is usually
the faster of the two.
The second phase processes the histogram buckets in the search
response into flat JSON and then posts the result stream to the job.
This phase can be split into batches where a few buckets are posted
to the job at a time. Cancelling can then work between batches.
This commit changes the AggregationDataExtractor to process the
search response in batches. The definition of a batch is crucial
as it has to be short enough to allow for responsive cancelling,
yet long enough to minimise overhead due to multiple calls to the
post data action. The number of key-value pairs written by the
processor is a good candidate for a batch size measure. By testing,
1000 seems to be an effective number.
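In outline, the batched processing looks something like this (a rough sketch; `writeBucketAsFlatJson`, `postDataToJob` and the exact loop structure are illustrative stand-ins for the real extractor code):

```java
// Sketch of the batching idea inside the extractor class.
private static final int BATCH_SIZE = 1000; // key-value pairs per batch; found effective by testing

private volatile boolean cancelled;

// Converts histogram buckets to flat JSON and posts them to the job a batch at a
// time, so cancellation can take effect between batches rather than only at the end.
void processResponse(Iterator<Histogram.Bucket> buckets) throws IOException {
    while (buckets.hasNext() && cancelled == false) {
        ByteArrayOutputStream batch = new ByteArrayOutputStream();
        int keyValuePairs = 0;
        while (buckets.hasNext() && keyValuePairs < BATCH_SIZE) {
            keyValuePairs += writeBucketAsFlatJson(buckets.next(), batch);
        }
        postDataToJob(batch.toByteArray()); // one post data call per batch
    }
}
```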
relates elastic/x-pack-elasticsearch#802
Original commit: elastic/x-pack-elasticsearch@ce3a172411
The native process can only handle one operation at a time, so in order to protect against multiple operations at a time (e.g. post data and flush, or multiple post data operations) there should be protection in place to guarantee that at most a single thread interacts with the native process. The current protection is broken when a job close is executed; more specifically, the wait logic is broken there.
This commit changes the threading logic when interacting with the native process by using a custom `ExecutorService` that uses a single worker thread from the `ml_autodetect_process` thread pool to interact with the native process. Requests from the ML APIs are queued initially, and this worker thread executes these requests one by one in the order they were submitted.
Removed the general `ml` thread pool and replaced its usages with the `ml_autodetect_process` or `management` thread pool.
Added a new thread pool just for the (re)normalizer, so that these operations are isolated from other operations.
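Very roughly, the serialisation looks like this (a simplified sketch; in the real change the single worker thread is drawn from the `ml_autodetect_process` thread pool rather than a standalone executor):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: all interaction with the native autodetect process goes through one
// worker thread, so at most a single operation touches the process at a time.
class AutodetectWorker {
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    // Post data, flush and close requests are queued here and executed strictly
    // one by one, in the order they were submitted.
    Future<?> submit(Runnable operation) {
        return executor.submit(operation);
    }

    void shutdown() {
        executor.shutdown();
    }
}
```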
relates elastic/x-pack-elasticsearch#582
Original commit: elastic/x-pack-elasticsearch@ff0c8dce0b
The slack tests seem to fail periodically with no output.
This commit tries to add some more verbose output by
making the query broader and taking failures into account,
in order to uncover what happens in this test.
Relates elastic/x-pack-elasticsearch#836
Original commit: elastic/x-pack-elasticsearch@e601b3a0df
This change adds a retain field to model snapshots.
A user can set retain to true/false via the update model snapshot API.
Model snapshots with retain set to true will not be deleted by
the daily maintenance service, regardless of whether they have expired.
This allows users to always keep certain snapshots around for
potentially reverting to in the future.
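The effect on the daily maintenance pass is roughly as follows (an illustrative sketch; `isRetain()`, `isExpired()` and `deleteSnapshot()` are stand-ins, not the actual remover code):

```java
// Sketch: expired snapshots are removed unless the user has flagged them with retain=true.
for (ModelSnapshot snapshot : candidateSnapshots) {
    if (snapshot.isRetain()) {
        continue; // retained snapshots are never deleted by the maintenance service
    }
    if (isExpired(snapshot)) {
        deleteSnapshot(snapshot);
    }
}
```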
relates elastic/x-pack-elasticsearch#758
Original commit: elastic/x-pack-elasticsearch@2283989a33
Previously, force closing a job required extra privileges. This change follows
the full discussion about which privileges should be required.
Original commit: elastic/x-pack-elasticsearch@4d85314b35
* Removed the OPENING and CLOSING job states. Instead, when the persistent task has been created and
its status hasn't been set yet, the job hasn't started; once the executor changes the status to STARTED, it has.
The coordinating node monitors the cluster state for a period of time until that happens and then returns or times out.
* Refactored the job close API to go to the node running the job task and close the job there.
* Changed unexpected job and datafeed exception messages to not mention the state and instead mention that the job/datafeed hasn't yet started/stopped.
Original commit: elastic/x-pack-elasticsearch@37e778b585
Add the `.monitoring-alerts-2` index template via the exporter. This
avoids a very common problem where the user wipes out their monitoring
indices manually, which means that the watches would then create an index
with dynamic mappings.
This adds a mechanism for posting a template that is not associated with a
Resolver (convenient for the forthcoming work _and_ for a future Logstash
index).
Original commit: elastic/x-pack-elasticsearch@a4cfc48191
The `stopped` and `removeOnCompletion` flags are not currently used, so this commit removes them for now to simplify things.
Original commit: elastic/x-pack-elasticsearch@c636c2817e
Previously a `kill -9` on the `autodetect` process associated with a
job would leave the job in the OPENED state.
Now if the C++ process dies before a request to close the job is made
then the job state is set to FAILED.
For this purpose C++ process death is defined as end-of-file on the
log stream. (Technically it would be possible to get end-of-file on
the log stream while the C++ process was still running, but this
would also represent an unexpected and undesirable situation.)
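In outline, the detection works something like this (a sketch under assumptions; the reader loop and the `closeRequested`/`setJobState` names are illustrative, not the actual code):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

// Sketch: tail the C++ process's log stream; reaching end-of-file means the process
// is gone. If no close was requested, treat that as an unexpected death.
void tailLogStream(InputStream logStream) throws IOException {
    try (BufferedReader reader = new BufferedReader(
            new InputStreamReader(logStream, StandardCharsets.UTF_8))) {
        String line;
        while ((line = reader.readLine()) != null) {
            handleLogLine(line); // normal log forwarding
        }
    }
    if (closeRequested == false) {
        setJobState(JobState.FAILED); // kill -9 or crash: mark the job FAILED, not OPENED
    }
}
```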
Original commit: elastic/x-pack-elasticsearch@2b74c56a79
This is to avoid losing data counts when the job gets restarted on another node.
The job stats API returns live data counts, which may not have been persisted to an index,
so getting the data counts via the search API gives us a better guarantee that when
the job gets restarted the data counts are there too. During job restart, a get call is
made to fetch the data counts in order to initialize the job.
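For illustration, the restart path reads the counts back before initialising the job, along these lines (a sketch; the provider and method names are assumptions, not necessarily the real API):

```java
// Sketch: prefer the data counts persisted in the index over whatever was in memory
// on the old node, so a restarted job starts from accurate counts.
jobProvider.dataCounts(jobId,
        dataCounts -> openJob(jobId, dataCounts),                 // initialise with persisted counts
        e -> logger.error("[" + jobId + "] Failed to load data counts", e));
```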
Original commit: elastic/x-pack-elasticsearch@901952da85