Previously, force closing a job required extra privileges. This change follows
on from the full discussion about which privileges should be required.
Original commit: elastic/x-pack-elasticsearch@4d85314b35
* Removed the OPENING and CLOSING job states. Instead, when the persistent task has been created but
its status has not yet been set, the job has not started; once the executor changes the status to STARTED, it has (see the sketch after this list).
The coordinating node monitors the cluster state for a period of time until that happens, then returns or times out.
* Refactored the job close API to go to the node running the job task and close the job there.
* Changed unexpected job and datafeed exception messages so that they no longer mention the state, and instead say that the job/datafeed has not yet started/stopped.
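As a rough sketch of the started check the coordinating node polls for (the `PersistentTask` type and `getStatus()` accessor here are assumptions for illustration, not necessarily the exact x-pack API):

```java
// Hypothetical sketch: a job task counts as started once its status has been set.
static boolean jobTaskStarted(PersistentTask<?> jobTask) {
    if (jobTask == null) {
        return false;                    // the persistent task has not been created yet
    }
    // A null status means the task exists but no executor has picked it up yet;
    // the executor sets the status (e.g. STARTED) once it starts running the job.
    return jobTask.getStatus() != null;
}
```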
Original commit: elastic/x-pack-elasticsearch@37e778b585
Add the `.monitoring-alerts-2` index template via the exporter. This
avoids a very common problem where the user wipes out their monitoring
indices manually, in which case the watches would then create an index
with dynamic mappings.
This adds a mechanism for posting a template that is not associated with a
Resolver (convenient for the forthcoming work _and_ for a future Logstash
index).
Original commit: elastic/x-pack-elasticsearch@a4cfc48191
The stopped and removeOnCompletion flags are not currently used; this commit removes them for now to simplify things.
Original commit: elastic/x-pack-elasticsearch@c636c2817e
Previously a `kill -9` on the `autodetect` process associated with a
job would leave the job in the OPENED state.
Now, if the C++ process dies before a request to close the job is made,
the job state is set to FAILED.
For this purpose C++ process death is defined as end-of-file on the
log stream. (Technically it would be possible to get end-of-file on
the log stream while the C++ process was still running, but this
would also represent an unexpected and undesirable situation.)
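As a rough illustration of the new behaviour (all class and method names below are placeholders, not the real x-pack code):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;

class AutodetectLogTail {
    // Placeholder for whatever updates the job state in the real code.
    interface JobStateSetter { void setFailed(); }

    static void tail(BufferedReader logStream, AtomicBoolean closeRequested, JobStateSetter state)
            throws IOException {
        // Drain the autodetect log stream; each line is a log message from the C++ process.
        while (logStream.readLine() != null) {
            // In the real code the line would be parsed and forwarded.
        }
        // readLine() returned null: end-of-file on the log stream, i.e. the process is gone.
        if (closeRequested.get() == false) {
            // The process died without a close being requested, so mark the job FAILED.
            state.setFailed();
        }
    }
}
```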
Original commit: elastic/x-pack-elasticsearch@2b74c56a79
This is to avoid losing data counts when the job gets restarted on another node.
The job stats API returns live data counts, which may not have been persisted to an index,
so getting the data counts via the search API gives us a better guarantee that when
the job gets restarted the data counts are there too. During job restart, a get call is
made to retrieve the data counts in order to initialize the job.
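A rough sketch of the search-based lookup (the index name and document id arguments are placeholders, and this assumes the 5.x transport client builders):

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.index.query.QueryBuilders;

class DataCountsLookup {
    // Fetch the persisted DataCounts document via the search API rather than a GET.
    static SearchResponse searchDataCounts(Client client, String resultsIndex, String dataCountsDocId) {
        return client.prepareSearch(resultsIndex)
                .setQuery(QueryBuilders.idsQuery().addIds(dataCountsDocId))
                .setSize(1)
                .get();
        // The caller parses the (at most one) hit back into a DataCounts object.
    }
}
```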
Original commit: elastic/x-pack-elasticsearch@901952da85
Previously the GET/PUT/DELETE filters actions were master node actions. This is not necessary since the filters are stored in an index rather than the cluster state. This change makes the actions extend `HandledTransportAction` so they can be run on any node.
The change also makes PutFilterAction.TransportAction use the TransportBulkAction instead of the deprecated TransportIndexAction.
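The bulk path is roughly equivalent to the following (the index name, type and id are placeholders, and the real action wires in TransportBulkAction directly rather than going through the client):

```java
import java.util.Map;

import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.support.WriteRequest;
import org.elasticsearch.client.Client;

class PutFilterSketch {
    // Index a filter document through the bulk API instead of the deprecated single-document index action.
    static void indexFilter(Client client, String index, String filterId, Map<String, Object> filterSource,
                            ActionListener<BulkResponse> listener) {
        IndexRequest indexRequest = new IndexRequest(index, "doc", filterId).source(filterSource);
        BulkRequest bulkRequest = new BulkRequest().add(indexRequest);
        bulkRequest.setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE);
        client.bulk(bulkRequest, listener);
    }
}
```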
relates elastic/x-pack-elasticsearch#756
Original commit: elastic/x-pack-elasticsearch@c6df04382e
This commit makes the MonitoringDoc immutable and removes the type() and id() methods from the "resolvers" so that they are no longer in charge of computing the documents' types and ids. Now each MonitoringDoc knows its type and is able to compute its own id if needed.
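In spirit, the document now looks something like this (the field names and id scheme are illustrative, not the actual monitoring code):

```java
// Illustrative only: an immutable document that knows its own type and can compute its own id.
final class ExampleMonitoringDoc {
    private final String type;
    private final String clusterUUID;
    private final long timestamp;

    ExampleMonitoringDoc(String type, String clusterUUID, long timestamp) {
        this.type = type;
        this.clusterUUID = clusterUUID;
        this.timestamp = timestamp;
    }

    String getType() {
        return type;
    }

    // Some document types derive a stable id from their own fields; others return null
    // and let Elasticsearch auto-generate one.
    String getId() {
        return clusterUUID + "_" + type + "_" + timestamp;
    }
}
```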
Original commit: elastic/x-pack-elasticsearch@5161cedcc8
This commit marks the x-pack plugin as having a native controller. Core
now requires this of any plugin that forks a native process, so that a
warning can be displayed to the user when they install the plugin.
Relates elastic/x-pack-elasticsearch#839
Original commit: elastic/x-pack-elasticsearch@3529250023
Refactors NodePersistentTask and RunningPersistentTask into a single AllocatedPersistentTask, and makes it possible to update the persistent task's status via AllocatedPersistentTask.
Original commit: elastic/x-pack-elasticsearch@8f59d7b819
All of our code supports configuring email addresses in the
email action not only via a JSON array, but also via a comma-separated
value (we also have tests for this). However, one piece did not support
this: when an email template is rendered into a concrete email.
This commit fixes that last piece, so that users are able to
specify comma-separated email addresses.
The main use case for this is having an array of email addresses
that can be joined in mustache with a comma in order to send to
several recipients.
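The fix essentially boils down to splitting the rendered value on commas; a minimal sketch (not the actual Watcher code):

```java
import java.util.ArrayList;
import java.util.List;

class EmailAddressSplitting {
    // "a@example.com, b@example.com" -> ["a@example.com", "b@example.com"]
    static List<String> splitAddresses(String value) {
        List<String> addresses = new ArrayList<>();
        for (String part : value.split(",")) {
            String trimmed = part.trim();
            if (trimmed.isEmpty() == false) {
                addresses.add(trimmed);
            }
        }
        return addresses;
    }
}
```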
Original commit: elastic/x-pack-elasticsearch@19794ba612
This commit ensures that upon reopening a job, the in-memory
model size stats are correctly initialized from the ones
last persisted in the results index.
This fixes the bug that could be seen upon opening a job
that has processed data and immediately calling its _stats
API, only to see that the model size stats are zero.
In addition, this PR refactors getting the parameters needed to
open an autodetect job:
- Previously, there was a method chaining together multiple
callbacks to the job provider.
- These methods were retrieving data via GETs which is not
going to work with index rollover.
Note, this PR is not eliminating all GETs. More work is needed
to fully support index rollover.
relates elastic/x-pack-elasticsearch#801
Original commit: elastic/x-pack-elasticsearch@1ef1d44b32
The XContent parser was only set up to read all data into a map,
which did not work when the returned data was in the form of an
array (for example, the cat APIs do this if the response
format is set to JSON).
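A rough sketch of the fix, assuming an `XContentParser` positioned at the start of the response body (the surrounding HTTP handling is omitted):

```java
import java.io.IOException;

import org.elasticsearch.common.xcontent.XContentParser;

class ResponseBodyParsing {
    // Returns a Map for object responses and a List for array responses
    // (e.g. the _cat APIs with format=json).
    static Object parseBody(XContentParser parser) throws IOException {
        XContentParser.Token token = parser.nextToken();
        if (token == XContentParser.Token.START_ARRAY) {
            return parser.list();
        }
        return parser.map();
    }
}
```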
relates elastic/x-pack-elasticsearch#351
Original commit: elastic/x-pack-elasticsearch@08ad457bf6
With the addition of building the backcompat version in elasticsearch, a
shallow clone no longer works, as we need to have e.g. the 5.x branch
available. This change removes the depth argument from cloning
elasticsearch.
Original commit: elastic/x-pack-elasticsearch@3d89fe92ec
If forced, the internal RemovePersistentTasks API is invoked instead of going through
ML. This removes the task, which should trigger the task framework to do the
necessary cleanup.
At that point, the Delete* APIs interpret a missing task as CLOSED/STOPPED,
so the job or datafeed can be removed regardless of its original state.
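Roughly (the interface and method names below are placeholders for the real transport actions):

```java
class ForceDeleteSketch {
    // Placeholders for the real operations.
    interface Ops {
        void removePersistentTask(String taskId);   // internal RemovePersistentTasks API
        void closeThenDelete(String jobId);         // normal path through ML
    }

    static void delete(Ops ops, String jobId, String taskId, boolean force) {
        if (force) {
            // Bypass ML: remove the persistent task directly and let the task framework clean up.
            ops.removePersistentTask(taskId);
        } else {
            ops.closeThenDelete(jobId);
        }
    }
}
```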
Original commit: elastic/x-pack-elasticsearch@bff23c7840
* [ML] Support all XContent types in Data API
This changes the POST Data API so that it accepts all XContent types instead of just JSON (see the sketch after this list).
For now the datafeed is restricted to sending only JSON to the POST Data API.
* Rename SimpleJsonRecordReader to XContentRecordReader
Also renames `DataFormat.JSON` to `DataFormat.XCONTENT`
* Fixes YAML tests
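A rough sketch of the content-type handling behind the first item (the method shape is a placeholder and assumes the 5.x `XContentFactory`/`NamedXContentRegistry` parser creation):

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Map;

import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.common.xcontent.XContentType;

class XContentRecordParsing {
    // Parse one record from the posted data using whatever XContentType the request declared,
    // instead of hard-coding JSON.
    static Map<String, Object> parseRecord(XContentType contentType, InputStream data) throws IOException {
        try (XContentParser parser = XContentFactory.xContent(contentType)
                .createParser(NamedXContentRegistry.EMPTY, data)) {
            return parser.map();
        }
    }
}
```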
Original commit: elastic/x-pack-elasticsearch@5fd20690b8