Allow deletes to proceed even if index is missing
Also adds some tests. All non-IndexNotFound exceptions will still abort the delete.
We can revisit this if we find other edge cases.
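A minimal sketch of the intended behaviour, with hypothetical helper names (not the actual delete code): the index-delete step swallows IndexNotFoundException and lets every other exception propagate and abort the delete.

```java
import org.elasticsearch.index.IndexNotFoundException;

class DeleteJobSketch {

    void deleteJobIndex(String jobId) {
        try {
            deleteIndex(jobId);                 // hypothetical index-delete step
        } catch (IndexNotFoundException e) {
            // The index is already gone; carry on with the rest of the delete.
        }
        // Any other exception propagates and aborts the delete.
    }

    void deleteIndex(String jobId) {
        // ... issue the actual delete-index request (omitted)
    }
}
```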
Original commit: elastic/x-pack-elasticsearch@823d00d8a7
Use FixBlockingClientOperations in two places where blocking client calls are OK,
because these methods aren't called from a network thread.
Original commit: elastic/x-pack-elasticsearch@a6dc34651c
Merged categoryDefinition(...) into categoryDefinitions(...) as the two did similar things. The get call has been replaced with a search with a query on the _uid field and routing on category id, so that the response handling code can be reused.
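A hedged sketch of what the replacement call looks like (index, type and field values are illustrative, not the exact JobProvider code): a search with a term query on _uid and routing set to the category id, so both the single and multi category paths share the same response handling.

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.index.query.QueryBuilders;

class CategorySearchSketch {

    SearchResponse searchCategory(Client client, String indexName, String categoryId) {
        // _uid is "type#id"; "category_definition" is an assumed type name.
        return client.prepareSearch(indexName)
                .setQuery(QueryBuilders.termQuery("_uid", "category_definition#" + categoryId))
                .setRouting(categoryId)    // same routing the old get call used
                .get();
    }
}
```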
Original commit: elastic/x-pack-elasticsearch@4243917b00
The start scheduler API waits until the scheduler state has been set to started before returning.
Before this change, the scheduler linked itself to the task created by the start scheduler API only after the scheduler state had been set to started.
If the stop scheduler API was called immediately after the start scheduler API, it could cancel the task without stopping the scheduler,
because the scheduler might not yet have been linked to the task.
Now the scheduler is linked to the task before the scheduler state is set to started, fixing the problematic situation described above.
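The fix boils down to an ordering change. A minimal sketch of the idea, with hypothetical names (this is not the actual x-pack scheduler code): link the scheduler to its task before publishing the started state, so any stop request that observes the started state can always cancel through the task.

```java
class SchedulerStartSketch {
    enum SchedulerStatus { STOPPED, STARTED }

    static class SchedulerTask {
        volatile Runnable cancelHandler = () -> { };

        void cancel() {
            cancelHandler.run();   // a stop request cancels through the task
        }
    }

    volatile SchedulerStatus status = SchedulerStatus.STOPPED;

    void start(SchedulerTask task, Runnable stopScheduler) {
        task.cancelHandler = stopScheduler;    // link the scheduler to the task first ...
        status = SchedulerStatus.STARTED;      // ... then advertise the started state
    }
}
```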
Original commit: elastic/x-pack-elasticsearch@8334ae1967
Also merged the JobProvider#getBucket(...) method into JobProvider#getBuckets(...), because
the two contained a lot of similar logic; otherwise getBucket(...) would have had to be converted to use non-blocking client calls too.
Part of elastic/elasticsearch#127
Original commit: elastic/x-pack-elasticsearch@b1e66b62cb
There was an N-squared algorithm in the state processing code, leading
to large state persistence eventually timing out. Large state documents
are read from the network in 8KB chunks, and the old code was checking
ALL previously read chunks for separators every time a new chunk was read.
Fixes elastic/elasticsearch#635
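A minimal sketch of the linear-time approach (illustrative names, a single '\0' separator assumed; not the actual StateProcessor code): remember how much of the buffered data has already been scanned, so each newly read 8KB chunk is searched for separators only once.

```java
class StateChunkScanner {
    private final StringBuilder pending = new StringBuilder();
    private int searchFrom = 0;   // first position not yet checked for a separator

    void onChunk(String chunk) {
        pending.append(chunk);
        int sep;
        while ((sep = pending.indexOf("\0", searchFrom)) != -1) {
            persist(pending.substring(0, sep));   // hypothetical persistence hook
            pending.delete(0, sep + 1);           // keep only the unconsumed tail
            searchFrom = 0;                       // the tail has not been searched yet
        }
        // The old code effectively restarted the search from 0 on every chunk,
        // re-scanning all previously read chunks and making large state quadratic.
        searchFrom = pending.length();
    }

    void persist(String stateDocument) {
        // ... index the completed state document (omitted)
    }
}
```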
Original commit: elastic/x-pack-elasticsearch@c814858c2c
Deleting a job now starts a three-step process:
1. Job status updated to DELETING
2. Physical index is deleted
3. Job removed from cluster state
While a job is in the DELETING state it cannot be modified or updated in any way. Only jobs in the DELETING state can actually be removed from the cluster state.
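A minimal sketch of the state-machine guards (all names hypothetical; the real job management code is more involved):

```java
class JobDeletionSketch {
    enum JobStatus { OPENED, CLOSED, DELETING }

    static class Job {
        final String id;
        volatile JobStatus status = JobStatus.CLOSED;
        Job(String id) { this.id = id; }
    }

    // The three steps, in order: mark DELETING, delete the physical index,
    // then remove the job from the cluster state.
    void deleteJob(Job job) {
        job.status = JobStatus.DELETING;
        deletePhysicalIndex(job);
        removeFromClusterState(job);
    }

    // Any modification is rejected once the job is DELETING.
    void updateJob(Job job) {
        if (job.status == JobStatus.DELETING) {
            throw new IllegalStateException("job [" + job.id + "] is being deleted");
        }
        // ... apply the update (omitted)
    }

    // Only DELETING jobs may be dropped from the cluster state.
    void removeFromClusterState(Job job) {
        if (job.status != JobStatus.DELETING) {
            throw new IllegalStateException("job [" + job.id + "] must be DELETING before removal");
        }
        // ... submit the cluster state update (omitted)
    }

    void deletePhysicalIndex(Job job) {
        // ... delete the job's physical index (omitted)
    }
}
```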
Original commit: elastic/x-pack-elasticsearch@2cd99a240c
With the fix we also make sure that prelert metadata is taken into account when verifying the cluster state consistency.
Original commit: elastic/x-pack-elasticsearch@1deaec3836
Note that the change in elasticsearch allowed us to store the scheduler
config's query and scriptFields as typed objects instead of
BytesReference.
Original commit: elastic/x-pack-elasticsearch@38c5aef2ef
* Check the bulk request contains actions before executing.
This avoids a validation exception about no requests having been added (see the sketch below).
* Persist the bulk request before refreshing the indexes on a flush acknowledgment
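A hedged sketch of the guard (illustrative, not the exact persister code): only execute the bulk request when it actually contains actions.

```java
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.client.Client;

class BulkGuardSketch {

    void executeIfNotEmpty(Client client, BulkRequest bulkRequest) {
        // Executing an empty bulk request triggers a "no requests added" validation
        // exception, so skip the call entirely when there is nothing to do.
        if (bulkRequest.numberOfActions() > 0) {
            client.bulk(bulkRequest).actionGet();   // blocking call, kept simple for the sketch
        }
    }
}
```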
Original commit: elastic/x-pack-elasticsearch@22543e46c8
* Put model state in the .mlstate index
* Revert results index rename
* Put ModelSnapshots in the results index
* Change state index in C++
* Fix logging
* Rename the state index to '.ml-state'
Original commit: elastic/x-pack-elasticsearch@dbe5f6b525
* Strict parse search parts of schedulerConfig
This commit adds methods to build the typed objects
for the search parts of a scheduler config. Those are:
query, aggregations and script_fields.
As scheduler configs are stored in the cluster state and parsing
the search parts requires a SearchRequestParsers object, we cannot
store them as typed fields. Instead, they are now stored as
BytesReferences.
This change is in preparation for switching over to using a
client based data extractor.
Point summary of changes:
- query, aggregations and scriptFields are now stored as BytesReference
- adds methods to build the corresponding typed objects
- putting a scheduler now builds the search parts to validate that
the config is valid (see the sketch below)
Relates to elastic/elasticsearch#478
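A rough sketch of the storage/validation split described above (all names hypothetical; the real config stores Elasticsearch BytesReference fields and parses them with SearchRequestParsers): the search parts live as raw bytes so they can sit in the cluster state, and putting a scheduler parses them once purely to prove the config is valid.

```java
class SchedulerSearchPartsSketch {
    private final byte[] query;            // stored form: raw bytes, cluster-state friendly
    private final byte[] aggregations;
    private final byte[] scriptFields;

    SchedulerSearchPartsSketch(byte[] query, byte[] aggregations, byte[] scriptFields) {
        this.query = query;
        this.aggregations = aggregations;
        this.scriptFields = scriptFields;
    }

    // Stand-ins for the new build methods: parse the raw bytes into typed objects
    // and fail fast if they are malformed. Called when a scheduler is put.
    void validate() {
        buildTypedObject(query, "query");
        buildTypedObject(aggregations, "aggregations");
        buildTypedObject(scriptFields, "script_fields");
    }

    private void buildTypedObject(byte[] source, String field) {
        if (source == null) {
            return;                        // optional search part
        }
        // ... parse the bytes into the typed search object here (omitted);
        // a parse failure surfaces as a validation error at put time.
    }
}
```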
Original commit: elastic/x-pack-elasticsearch@e6d5a85871
This doesn't happen initially when buckets are output by the C++, but
buckets can contain records at the moment they're sent for persistence
during normalization or during integration tests. It's safest if the
persistence code specifically doesn't persist these records.
Original commit: elastic/x-pack-elasticsearch@a93135d8c0
* Add job config option to set the index name
* Check index does not already exist if 'index_name' is set
* Don't create alias if 'index_name' is the same as 'job_id'
* Default index_name value
Set it to job_id if null and only create the index alias if job_id != index_name
* Fix compile errors after rebasing
* Address review comments
* Test if the index exists by checking the cluster state (see the sketch below)
* Update comment
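A hedged sketch of the checks (illustrative, not the exact x-pack code, written against the 5.x-era cluster state API): the cluster state is consulted for an existing index, and the alias is only created when index_name differs from job_id.

```java
import org.elasticsearch.cluster.ClusterState;

class JobIndexNameSketch {

    // index_name defaults to job_id when not set.
    String physicalIndexName(String jobId, String indexName) {
        return indexName != null ? indexName : jobId;
    }

    // A custom index_name must not point at an index that already exists;
    // the cluster state is enough to check this.
    void checkIndexDoesNotExist(ClusterState state, String jobId, String indexName) {
        if (indexName != null && state.metaData().hasIndex(physicalIndexName(jobId, indexName))) {
            throw new IllegalArgumentException("index [" + indexName + "] already exists");
        }
    }

    // The alias is only created when the job uses a custom index name.
    boolean needsAlias(String jobId, String indexName) {
        return physicalIndexName(jobId, indexName).equals(jobId) == false;
    }
}
```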
Original commit: elastic/x-pack-elasticsearch@a3e7f1a5bb