This commit reenables the Monitoring Bulk API REST tests. XPackRestIT
now enables/disables the local default exporter before executing the monitoring
tests, and also waits for the monitoring service to be started before running
them.
Original commit: elastic/x-pack-elasticsearch@10b696198c
State processing can take a lot longer than log processing, even after
the C++ process has closed its end of the pipe. The pipe has a buffer,
and indexing the state document(s) in that buffer can take more than a
second.
relates elastic/x-pack-elasticsearch#945
Original commit: elastic/x-pack-elasticsearch@65f5075028
* [ML] Set job create time on server
* Job.Builder serialisation tests
* Make setCreateTime package private
Original commit: elastic/x-pack-elasticsearch@d2d75e0d7b
Adds the following validations:
- aggregations must contain date_histogram or histogram at the top level
- a date_histogram has to have its time_zone set to UTC (or unset, which
defaults to UTC)
- a date_histogram supports calendar intervals only up to 1 week
to avoid the length variability of longer intervals
- aggregation interval must be greater than zero
- aggregation interval must be less than or equal to the bucket_span
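For illustration, the checks amount to something like the following sketch (all names are assumptions, not the actual x-pack validation classes):

```java
import java.util.Set;

// Minimal sketch of the rules above; illustrative names only.
final class DatafeedAggValidation {

    // Calendar intervals longer than one week vary in length and are rejected.
    private static final Set<String> LONGER_THAN_A_WEEK =
            Set.of("1M", "month", "1q", "quarter", "1y", "year");

    static void validate(String topLevelAggType, String dateHistogramTimeZone,
                         String calendarInterval, long intervalMillis, long bucketSpanMillis) {
        if ("date_histogram".equals(topLevelAggType) == false
                && "histogram".equals(topLevelAggType) == false) {
            throw new IllegalArgumentException("top level aggregation must be a histogram or date_histogram");
        }
        if (dateHistogramTimeZone != null && "UTC".equals(dateHistogramTimeZone) == false) {
            throw new IllegalArgumentException("date_histogram time_zone must be UTC (or unset)");
        }
        if (calendarInterval != null && LONGER_THAN_A_WEEK.contains(calendarInterval)) {
            throw new IllegalArgumentException("calendar intervals longer than 1 week are not supported");
        }
        if (intervalMillis <= 0) {
            throw new IllegalArgumentException("aggregation interval must be greater than zero");
        }
        if (intervalMillis > bucketSpanMillis) {
            throw new IllegalArgumentException("aggregation interval must be less than or equal to the bucket_span");
        }
    }
}
```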
Original commit: elastic/x-pack-elasticsearch@404496a886
* Remove JobManagers dependency on JobResultsPerister
* Remove unneeded call to refresh the state index
Original commit: elastic/x-pack-elasticsearch@0b2351bba7
If jobs are being deleted then the operations required to get stats
could fail with unexpected exceptions. When stats for multiple jobs
were being requested, this would previously cause the whole operation
to fail.
This commit changes the stats endpoint to ignore jobs that are being
deleted.
Fixes elastic/prelert-legacy#837
Original commit: elastic/x-pack-elasticsearch@6ac141a987
When this happens it means the job has been deleted, which in turn means
the C++ process has been stopped, so there's no need to send it a message
and hence no problem worth logging a stack trace for.
This differs from elastic/x-pack-elasticsearch#896, which was for a similar situation with
closed jobs, whereas this one is for deleted jobs.
Original commit: elastic/x-pack-elasticsearch@9bb4e98fe7
The test is too rigid in checking that the right number of node_stats documents has been collected: if a node takes time to start, the node_stats count % numNodes will always be different from 0.
It also adds more logging for LocalBulk failures.
Original commit: elastic/x-pack-elasticsearch@1ebb20b6f6
In order to prevent task state updates by stale executors, this commit adds a check for the correct allocation id during the status update operation.
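Conceptually the check looks like this minimal sketch (illustrative names, not the actual persistent-tasks classes):

```java
// Illustrative sketch only: a status update is applied only if the allocation id
// in the request matches the allocation id currently recorded for the task.
final class TaskStatusUpdater {

    static final class TaskRecord {
        final long allocationId;
        volatile String status;

        TaskRecord(long allocationId, String status) {
            this.allocationId = allocationId;
            this.status = status;
        }
    }

    /** Returns true if the update was applied, false if it came from a stale executor. */
    static boolean updateStatus(TaskRecord task, long requestAllocationId, String newStatus) {
        if (task.allocationId != requestAllocationId) {
            // The task has since been reallocated; ignore the stale update.
            return false;
        }
        task.status = newStatus;
        return true;
    }
}
```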
Original commit: elastic/x-pack-elasticsearch@b94eb0e863
This commit changes the LocalExporterTests so that it now tests
various randomized cases in a single test. This should speed up
the test as well as minimize failures due to multiple starts/stops
of the exporter. It also uses the Monitoring Bulk API
instead of calling the Exporter instances directly, which makes more
sense since that is the normal way to index monitoring documents.
Related elastic/x-pack-elasticsearch#416
Original commit: elastic/x-pack-elasticsearch@f8a4af15cd
Detector configs are validated both by our C++ and by our Java code.
If the C++ is stricter than the Java then error reporting is poor.
This commit adds two extra validation checks to the Java code that
were already present in the C++ validation.
relates elastic/x-pack-elasticsearch#856
Original commit: elastic/x-pack-elasticsearch@bd4ce2377c
It's possible for a C++ process to exit between the time when a
config update message for it is queued and the time that message
is processed. This commit ensures we don't spam the log with a
stack trace in this situation, as it's not a problem at all.
relates elastic/x-pack-elasticsearch#891
Original commit: elastic/x-pack-elasticsearch@81af8eaf70
Aggregated data extraction is done in 2 phases:
1. search
2. process response
The first phase cannot currently be cancelled. However, it is
usually the faster of the two.
The second phase processes the histogram buckets in the search
response into flat JSON and then posts the result stream to the job.
This phase can be split into batches where a few buckets are posted
to the job at a time. Cancelling can then work between batches.
This commit changes the AggregationDataExtractor to process the
search response in batches. The definition of a batch is crucial
as it has to be short enough to allow for responsive cancelling,
yet long enough to minimise overhead due to multiple calls to the
post data action. The number of key-value pairs written by the
processor is a good candidate for a batch size measure. By testing,
1000 seems to be an effective number.
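A rough sketch of the batching loop, assuming illustrative types rather than the real extractor classes:

```java
import java.util.Iterator;
import java.util.List;

// Illustrative sketch: flatten histogram buckets into JSON and hand them to the job
// in batches, checking for cancellation between batches.
final class BatchedAggregationProcessor {

    private static final int BATCH_SIZE_KEY_VALUE_PAIRS = 1000;

    interface Bucket { List<String> toFlatJsonLines(); int keyValueCount(); }
    interface JobClient { void postData(String jsonLines); }

    private volatile boolean cancelled;

    void cancel() { cancelled = true; }

    void process(Iterator<Bucket> buckets, JobClient job) {
        StringBuilder batch = new StringBuilder();
        int keyValuePairsInBatch = 0;
        while (buckets.hasNext() && !cancelled) {
            Bucket bucket = buckets.next();
            for (String line : bucket.toFlatJsonLines()) {
                batch.append(line).append('\n');
            }
            keyValuePairsInBatch += bucket.keyValueCount();
            if (keyValuePairsInBatch >= BATCH_SIZE_KEY_VALUE_PAIRS) {
                job.postData(batch.toString());   // one call to the post data action per batch
                batch.setLength(0);
                keyValuePairsInBatch = 0;
            }
        }
        if (batch.length() > 0 && !cancelled) {
            job.postData(batch.toString());       // post the final, partial batch
        }
    }
}
```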
relates elastic/x-pack-elasticsearch#802
Original commit: elastic/x-pack-elasticsearch@ce3a172411
The native process can only handle one operation at a time, so in order to protect against concurrent operations (e.g. post data and flush, or multiple post data operations) there must be protection in place to guarantee that at most a single thread interacts with the native process at any time. The current protection is broken when a job close is executed; more specifically, the wait logic is broken there.
This commit changes the threading logic when interacting with the native process by using a custom `ExecutorService` that uses a single worker thread from the `ml_autodetect_process` thread pool to interact with the native process. Requests from the ML APIs are initially queued, and this worker thread executes them one by one in the order they were submitted.
Removed the general `ml` thread pool and replaced its usages with the `ml_autodetect_process` or `management` thread pool.
Added a new thread pool just for the (re)normalizer, so that these operations are isolated from other operations.
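Conceptually, the serialization works like this minimal sketch (the real worker thread comes from the `ml_autodetect_process` thread pool; names here are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch: all interaction with the native process is funnelled through a
// single worker thread, so requests are executed one at a time in submission order.
final class AutodetectCommunicator {

    // In reality the worker comes from the ml_autodetect_process thread pool; a plain
    // single-thread executor just illustrates the ordering guarantee.
    private final ExecutorService singleWorker = Executors.newSingleThreadExecutor();

    void writeData(String records) {
        singleWorker.execute(() -> {
            // write records to the native process' input pipe
        });
    }

    void flush() {
        singleWorker.execute(() -> {
            // send a flush request to the native process and wait for its acknowledgement
        });
    }

    void close() {
        singleWorker.execute(() -> {
            // ask the native process to shut down cleanly
        });
        singleWorker.shutdown();
    }
}
```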
relates elastic/x-pack-elasticsearch#582
Original commit: elastic/x-pack-elasticsearch@ff0c8dce0b
The Slack tests seem to fail periodically with no output.
This commit tries to add more verbose output by
making the query broader and taking failures into account,
in order to uncover what happens in this test.
Relates elastic/x-pack-elasticsearch#836
Original commit: elastic/x-pack-elasticsearch@e601b3a0df
This change adds a retain field to model snapshots.
A user can set retain to true/false via the update model snapshot API.
Model snapshots with retain set to true will not be deleted by
the daily maintenance service regardless of whether they expired.
This allows users to always keep certain snapshots around for
potentially reverting to in the future.
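In effect, the maintenance check becomes something like this sketch (class and field names are assumptions):

```java
import java.time.Instant;

// Illustrative sketch: an expired snapshot is only removed if it is not marked as retained.
final class SnapshotRetentionCheck {

    static final class ModelSnapshot {
        final Instant timestamp;
        final boolean retain;

        ModelSnapshot(Instant timestamp, boolean retain) {
            this.timestamp = timestamp;
            this.retain = retain;
        }
    }

    static boolean shouldDelete(ModelSnapshot snapshot, Instant expiryCutoff) {
        boolean expired = snapshot.timestamp.isBefore(expiryCutoff);
        return expired && snapshot.retain == false;
    }
}
```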
relates elastic/x-pack-elasticsearch#758
Original commit: elastic/x-pack-elasticsearch@2283989a33
* Removed the OPENING and CLOSING job states. Instead, when the persistent task has been created
but its status has not yet been set, the job has not yet started; once the executor changes the status to STARTED, it has.
The coordinating node monitors the cluster state for a period of time until that happens and then returns or times out (see the sketch below).
* Refactored the job close API to go to the node running the job task and close the job there.
* Changed the unexpected job and datafeed exception messages to not mention the state and instead say that the job/datafeed has not yet started/stopped.
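A minimal sketch of the coordinating node's wait, with illustrative names (the real implementation observes the cluster state rather than polling):

```java
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Illustrative sketch: poll the task status as seen in the cluster state until it becomes
// STARTED, or give up after a timeout. A null status means the executor has not started yet.
final class OpenJobWaiter {

    static boolean waitUntilStarted(Supplier<String> taskStatusFromClusterState,
                                    long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            String status = taskStatusFromClusterState.get();
            if ("STARTED".equals(status)) {
                return true;                       // the executor has picked up the task
            }
            TimeUnit.MILLISECONDS.sleep(100);
        }
        return false;                              // timed out waiting for the job to start
    }
}
```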
Original commit: elastic/x-pack-elasticsearch@37e778b585
Add the `.monitoring-alerts-2` index template via the exporter. This
avoids a very common problem where the user manually wipes out their monitoring
indices, which means that the watches would then create an index
with dynamic mappings.
This adds a mechanism for posting a template that is not associated with a
Resolver (convenient for the forthcoming work _and_ for a future Logstash
index).
Original commit: elastic/x-pack-elasticsearch@a4cfc48191
The stopped and removeOnCompletion flags are not currently used; this commit removes them for now to simplify things.
Original commit: elastic/x-pack-elasticsearch@c636c2817e
Previously a `kill -9` on the `autodetect` process associated with a
job would leave the job in the OPENED state.
Now if the C++ process dies before a request to close the job is made
then the job state is set to FAILED.
For this purpose C++ process death is defined as end-of-file on the
log stream. (Technically it would be possible to get end-of-file on
the log stream while the C++ process was still running, but this
would also represent an unexpected and undesirable situation.)
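A minimal sketch of the idea, with illustrative names rather than the real classes:

```java
// Illustrative sketch: when the log stream from the autodetect process hits end-of-file
// and no close had been requested, the job is marked FAILED.
final class ProcessWatcher {

    enum JobState { OPENED, CLOSING, CLOSED, FAILED }

    private volatile boolean closeRequested;
    private volatile JobState state = JobState.OPENED;

    void requestClose() {
        closeRequested = true;
        state = JobState.CLOSING;
    }

    // Called by the thread that tails the C++ process' log stream.
    void onLogStreamEndOfFile() {
        if (closeRequested) {
            state = JobState.CLOSED;   // expected: the process exited because we asked it to
        } else {
            state = JobState.FAILED;   // unexpected death, e.g. kill -9 on autodetect
        }
    }

    JobState state() {
        return state;
    }
}
```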
Original commit: elastic/x-pack-elasticsearch@2b74c56a79
This is to avoid losing data counts when the job gets restarted on another node.
The job stats API returns live data counts, which may not have been persisted to an index,
so getting the data counts via the search API gives a better guarantee that when
the job gets restarted the data counts are there too. During job restart, the data counts
are retrieved in order to initialize the job.
Original commit: elastic/x-pack-elasticsearch@901952da85
Previously the GET/PUT/DELETE filters actions were master node actions. This is not necessary since the filters are stored in an index rather than the cluster state. This change makes the actions extend `HandledTransportAction` so they can be run on any node.
The change also makes PutFilterAction.TransportAction use the TransportBulkAction instead of the deprecated TransportIndexAction.
relates elastic/x-pack-elasticsearch#756
Original commit: elastic/x-pack-elasticsearch@c6df04382e
This commit makes the MonitoringDoc immutable and removes the type() and id() methods from "resolvers" so that they are no longer in charge of computing the documents' types and ids. Now each MonitoringDoc knows its type and is able to compute its own id if needed.
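The shape of the change, as a hedged sketch with assumed field names:

```java
import java.util.Objects;

// Illustrative sketch only: the document is immutable, carries its own type, and can
// compute its own id when one is needed (fields and id format are assumptions).
final class MonitoringDoc {

    private final String type;
    private final String cluster;
    private final long timestamp;

    MonitoringDoc(String type, String cluster, long timestamp) {
        this.type = Objects.requireNonNull(type);
        this.cluster = Objects.requireNonNull(cluster);
        this.timestamp = timestamp;
    }

    String type() {
        return type;
    }

    // Id is derived from the document's own fields instead of being assigned by a resolver.
    String id() {
        return cluster + "-" + type + "-" + timestamp;
    }
}
```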
Original commit: elastic/x-pack-elasticsearch@5161cedcc8
This commit marks the x-pack plugin as having a native controller. This
is now a requirement in core for any plugin that forks a native process
to display a warning to the user when they install the plugin.
Relates elastic/x-pack-elasticsearch#839
Original commit: elastic/x-pack-elasticsearch@3529250023
Refactors NodePersistentTask and RunningPersistentTask into a single AllocatedPersistentTask. Makes it possible to update Persistent Task Status via AllocatedPersistentTask.
Original commit: elastic/x-pack-elasticsearch@8f59d7b819
All of our code supports configuring email addresses in the
email action not only via a JSON array, but also via a comma
separated value (we also have tests for this). However, in one place
we did not support this: where an email template is rendered into
a concrete email.
This commit fixes that last piece, so that users are able to
specify comma separated email addresses.
The main use case for this is having an array of email addresses
that can be joined in mustache with a comma in order to send to
several recipients.
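A minimal sketch of the fix, assuming a hypothetical helper rather than the actual Watcher classes:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: a rendered "to" field may be either a single address or a
// comma separated list, so it is split before building the concrete email.
final class EmailAddressParser {

    static List<String> parse(String rendered) {
        List<String> addresses = new ArrayList<>();
        for (String part : rendered.split(",")) {
            String address = part.trim();
            if (address.isEmpty() == false) {
                addresses.add(address);
            }
        }
        return addresses;
    }
}
```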
Original commit: elastic/x-pack-elasticsearch@19794ba612
This commit ensures that upon reopening a job, the in-memory
model size stats are correctly initialized from the ones
last persisted in the results index.
This fixes a bug where, upon opening a job that has processed
data and immediately calling its _stats API, the model size
stats would show as zero.
In addition, this PR refactors getting the parameters needed to
open an autodetect job:
- Previously, there was a method chaining together multiple
callbacks to the job provider.
- These methods were retrieving data via GETs which is not
going to work with index rollover.
Note, this PR is not eliminating all GETs. More work is needed
to fully support index rollover.
relates elastic/x-pack-elasticsearch#801
Original commit: elastic/x-pack-elasticsearch@1ef1d44b32
The xcontent parser was only set up to read all data into a map,
which did not work when the returned data was in the form of an
array (for example, the cat API does this if the response
format is set to JSON).
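The idea, illustrated with Jackson as a stand-in for the XContent machinery (not the actual x-pack code):

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// Stand-in illustration only: the response body may be a JSON object or a JSON array,
// and both shapes need to be handled instead of always reading a map.
final class HttpResponseBodyParser {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    static Object parse(String body) throws Exception {
        JsonNode root = MAPPER.readTree(body);
        if (root.isArray()) {
            // e.g. the cat APIs with format=json return a top-level array
            return MAPPER.convertValue(root, java.util.List.class);
        }
        return MAPPER.convertValue(root, java.util.Map.class);
    }
}
```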
relates elastic/x-pack-elasticsearch#351
Original commit: elastic/x-pack-elasticsearch@08ad457bf6
If forced, the internal RemovePersistentTasks API is invoked instead of going through
ML. This will remove the task, which should trigger the task framework to do
necessary cleanup.
At that point, the Delete* APIs interpret a missing task as CLOSED/STOPPED,
so they can be removed regardless of the original state.
Original commit: elastic/x-pack-elasticsearch@bff23c7840
* [ML] Support all XContent types in Data API
This changes the POST Data API so that it accepts all XContent types instead of just JSON.
For now the datafeed is restricted to only sending JSON to the POST data API.
* Rename SimpleJsonRecordReader to XContentRecordReader
Also renames `DataFormat.JSON` to `DataFormat.XCONTENT`
* fixes YAML tests
Original commit: elastic/x-pack-elasticsearch@5fd20690b8
This adds basic info about jobs and datafeeds, sufficient
for the first release to support monitoring and
phone-home.
In particular, usage now displays:
- job count for _all and by state
- detectors statistics (min, max, mean, total) for _all jobs and by job state
- model size statistics (min, max, mean, total) for _all jobs and by job state
- datafeed count for _all and by state
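For illustration, the per-metric statistics are plain min/max/mean/total aggregations, e.g. over detector counts per job (hypothetical helper):

```java
import java.util.DoubleSummaryStatistics;
import java.util.List;

// Illustrative sketch of the kind of statistics reported by usage.
final class UsageStats {

    static DoubleSummaryStatistics detectorStats(List<Integer> detectorCountsPerJob) {
        return detectorCountsPerJob.stream()
                .mapToDouble(Integer::doubleValue)
                .summaryStatistics();
    }

    public static void main(String[] args) {
        DoubleSummaryStatistics stats = detectorStats(List.of(1, 3, 2));
        System.out.printf("min=%.1f max=%.1f mean=%.2f total=%.1f%n",
                stats.getMin(), stats.getMax(), stats.getAverage(), stats.getSum());
    }
}
```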
Relates elastic/x-pack-elasticsearch#660
Original commit: elastic/x-pack-elasticsearch@6e0da6c3db
This change cleans up some NORELEASE comments that are either no longer relevant or actually should be TODO comments
Original commit: elastic/x-pack-elasticsearch@9947f1176e
Submit job updates to a concurrent queue once the job update has been processed by the ClusterService, then, from a background thread, delegate the job updates to the node running the autodetect process. This maintains the same order in which the job config updates were applied to the cluster state, and thus prevents config updates to the same job from arriving at the job's autodetect process in the wrong order (the expectation is that in practice this will rarely happen).
The behaviour of the update API changes with this PR: the API now returns when the update has been made to the cluster state, whereas before it would only return once the update had also been made to the autodetect process. Updating the autodetect process now happens in the background. I think this change in behaviour is acceptable.
Use TP#scheduleWithFixedDelay(...) instead of TP#schedule(...) and
removed the custom rescheduling and cancelling.
Also changed LocalNodeMasterListener#executorName to SAME
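A minimal sketch of the queue-and-drain approach, with illustrative names standing in for the real classes and thread pool:

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: updates are queued in cluster-state order, and a background task
// drains the queue and forwards each update to the node running the autodetect process.
final class UpdateJobProcessNotifier {

    interface JobUpdate { void sendToAutodetectProcess(); }

    private final ConcurrentLinkedQueue<JobUpdate> pending = new ConcurrentLinkedQueue<>();
    // Stands in for threadPool.scheduleWithFixedDelay(...); not the real thread pool.
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    void start() {
        scheduler.scheduleWithFixedDelay(this::drainQueue, 1, 1, TimeUnit.SECONDS);
    }

    // Called once the update has been applied to the cluster state.
    void enqueue(JobUpdate update) {
        pending.offer(update);
    }

    private void drainQueue() {
        JobUpdate update;
        while ((update = pending.poll()) != null) {
            update.sendToAutodetectProcess();   // delegated in the order the updates arrived
        }
    }

    void stop() {
        scheduler.shutdown();
    }
}
```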
Original commit: elastic/x-pack-elasticsearch@c24c0dd7d7
If a persistent task throws an exception, the persistent tasks framework will no longer try to restart the task. This is a temporary measure to prevent thrashing the cluster with endless restart attempts. We will revisit this in a future version to make the restart process more robust. Please note, however, that if the node executing the task goes down, the task will still be restarted on another node.
Original commit: elastic/x-pack-elasticsearch@30712e0fbf