Note that the change in elasticsearch allowed us to store scheduler
config's query and scriptFields as typed objects instead of
BytesReference.
Original commit: elastic/x-pack-elasticsearch@38c5aef2ef
* Check the bulk request contains actions before executing.
This suppresses a validation exception about no requests being added (see the sketch below).
* Persist bulk request before refreshing the indexes on a flush acknowledgment
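A minimal sketch of the guard, assuming an Elasticsearch BulkRequest and a hypothetical persistBulk helper:

    import org.elasticsearch.action.bulk.BulkRequest;
    import org.elasticsearch.client.Client;

    class BulkPersisterSketch {
        // skip the bulk call entirely when no actions were added; executing an
        // empty request would trip the "no requests added" validation
        static void persistBulk(Client client, BulkRequest bulkRequest) {
            if (bulkRequest.numberOfActions() == 0) {
                return; // nothing to persist
            }
            client.bulk(bulkRequest).actionGet();
        }
    }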
Original commit: elastic/x-pack-elasticsearch@22543e46c8
In https://github.com/elastic/elasticsearch/pull/21964, index
and delete operations are executed as single item bulk requests
internally. This means index and delete operations use the
bulk transport endpoints (indices:data/write/bulk[s][p] and
indices:data/write/bulk[s][r]).
This PR adds the bulk transport endpoints to the 'write' and 'delete'
index privileges and adds the index and delete actions as composite
actions to delay authorization to the shard level.
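A rough illustration of why the shard-level bulk action names now have to be covered by the 'write' privilege. This is not x-pack's actual automaton-based privilege matching; the pattern below is an assumption for the sketch:

    import java.util.regex.Pattern;

    class WritePrivilegeSketch {
        // hypothetical stand-in for the 'write' privilege's action pattern
        static final Pattern WRITE = Pattern.compile("indices:data/write/.*");

        static boolean grants(String action) {
            return WRITE.matcher(action).matches();
        }

        public static void main(String[] args) {
            // single index/delete operations now travel through these
            // shard-level bulk actions, so they must be granted as well
            System.out.println(grants("indices:data/write/bulk[s][p]")); // true
            System.out.println(grants("indices:data/write/bulk[s][r]")); // true
        }
    }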
Original commit: elastic/x-pack-elasticsearch@2305fc9ca0
This commit adapts to the rename of ESTestCase#randomPositiveLong
to ESTestCase#randomNonNegativeLong.
Original commit: elastic/x-pack-elasticsearch@689429cb54
* Put model state in the .mlstate index
* Revert results index rename
* Put ModelSnapshots in the results index
* Change state index in C++
* Fix logging
* Rename the state index to '.ml-state'
Original commit: elastic/x-pack-elasticsearch@dbe5f6b525
* Strict parse search parts of schedulerConfig
This commit adds methods to build the typed objects
for the search parts of a scheduler config. Those are:
query, aggregations and script_fields.
As scheduler configs are stored in the cluster state and parsing
the search parts requires a SearchRequestParsers object, we cannot
store them as typed fields. Instead, they are now stored as
BytesReferences.
This change is in preparation for switching over to using a
client based data extractor.
Point summary of changes:
- query, aggregations and scriptFields are now stored as BytesReference
- adds methods to build the corresponding typed objects (see the sketch below)
- putting a scheduler now builds the search parts to validate that
the config is valid
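A heavily simplified sketch of the design; all types here are hypothetical stand-ins (the real code uses BytesReference and SearchRequestParsers). The config keeps the raw bytes and only builds the typed object when a parser is available:

    class SchedulerConfigSketch {
        private final byte[] querySource; // stand-in for a BytesReference

        SchedulerConfigSketch(byte[] querySource) {
            this.querySource = querySource;
        }

        // the caller supplies the parser (SearchRequestParsers in the real
        // code), so cluster-state (de)serialization never needs it
        <T> T buildQuery(QueryParser<T> parser) {
            return parser.parse(querySource);
        }

        interface QueryParser<T> {
            T parse(byte[] source);
        }
    }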
Relates to elastic/elasticsearch#478
Original commit: elastic/x-pack-elasticsearch@e6d5a85871
This doesn't happen initially when buckets are output by the C++ process,
but buckets can contain records at the moment they're sent for persistence
during normalization or during integration tests. It's safest if the
persistence code explicitly avoids persisting these records.
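A hedged sketch of the intent, with hypothetical types and field names: whatever the bucket carries in memory, the persisted form drops the nested records, which are written as separate documents.

    import java.util.HashMap;
    import java.util.Map;

    class BucketPersistenceSketch {
        // Bucket, its getters and the field names are assumptions for this sketch
        static Map<String, Object> bucketDocument(Bucket bucket) {
            Map<String, Object> doc = new HashMap<>();
            doc.put("timestamp", bucket.getTimestamp());
            doc.put("anomaly_score", bucket.getAnomalyScore());
            // deliberately no "records" field: records are persisted as
            // separate documents
            return doc;
        }

        interface Bucket {
            Object getTimestamp();
            double getAnomalyScore();
        }
    }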
Original commit: elastic/x-pack-elasticsearch@a93135d8c0
* Update readme to reflect new dev setup directory structure.
* Fix typo in elasticsearch-extra path in readme
* Update gradle exception for x-pack directory structure.
* Make directory path where x-pack must be checked out explicit in the gradle exception
Original commit: elastic/x-pack-elasticsearch@91f1d04542
This "super registry" will eventually replace things like
`IndicesQueriesRegistry` but for now it is just another thing
to plumb across requests.
Original commit: elastic/x-pack-elasticsearch@da26a42b36
When the index action is used to do bulk indexing, the individual
items of the response were not checked for having been indexed successfully.
This could lead to NPEs due to an index response being null when the index
operation had failed; the action was still logged as a success though.
This commit returns SUCCESS for the action only if all items were indexed
successfully. If all items failed, the result will be FAILED. Lastly,
the result status PARTIAL_FAILURE is used if there were both successful and
unsuccessful index operations.
Additionally, some minor cleanups happened, like changing equals/hashCode.
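A minimal sketch of the status computation, using the BulkResponse/BulkItemResponse accessors; the Status enum here is only a stand-in for the action result statuses named above:

    import org.elasticsearch.action.bulk.BulkItemResponse;
    import org.elasticsearch.action.bulk.BulkResponse;

    class IndexActionStatusSketch {
        enum Status { SUCCESS, PARTIAL_FAILURE, FAILED }

        static Status statusOf(BulkResponse response) {
            int failed = 0;
            for (BulkItemResponse item : response.getItems()) {
                if (item.isFailed()) {
                    failed++;
                }
            }
            if (failed == 0) {
                return Status.SUCCESS;
            }
            return failed == response.getItems().length ? Status.FAILED : Status.PARTIAL_FAILURE;
        }
    }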
Closes elastic/elasticsearch#4416
Original commit: elastic/x-pack-elasticsearch@692687e1af
The special `-Dtests.jvm.argline` params needed for jdk-9 builds do not
get passed correctly if enclosed within single quotes.
Fix the jdk9 target of the `dev-tools/ci` script so that it correctly
passes the `-Dtests.jvm.argline` parameters.
Relates: elastic/elasticsearch#4428
Original commit: elastic/x-pack-elasticsearch@6cd329b8da
* Add job config option to set the index name
* Check the index does not already exist if 'index_name' is set
* Don't create an alias if 'index_name' is the same as 'job_id'
* Default the index_name value (see the sketch below)
Set it to job_id if null and only create the index alias if job_id != index_name
* Fix compile errors after rebasing
* Address review comments
* Test if the index exists by checking the cluster state
* Update comment
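A minimal sketch of the defaulting rule described above (method names are hypothetical):

    class JobIndexNameSketch {
        // index_name falls back to job_id when unset
        static String effectiveIndexName(String jobId, String indexName) {
            return indexName == null ? jobId : indexName;
        }

        // the alias is only needed when the job writes to an index whose
        // name differs from the job id
        static boolean shouldCreateAlias(String jobId, String indexName) {
            return effectiveIndexName(jobId, indexName).equals(jobId) == false;
        }
    }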
Original commit: elastic/x-pack-elasticsearch@a3e7f1a5bb
Removing the WatchLockService could result in duplicated wids because of
an incorrect call that replaced underscores with dashes. As UUIDs.createBase64UUID()
can contain underscores, which are effectively reserved in the Wid class due to
its handling of watch ids, this change simply uses the toString() representation
of a random UUID.
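For illustration: java.util.UUID's string form contains only hex digits and dashes, so it cannot clash with the underscore handling in Wid. The wid layout shown here is a hypothetical simplification, not the actual Wid format:

    import java.util.UUID;

    class WidSketch {
        // hypothetical wid construction: the random component from
        // UUID.randomUUID().toString() never contains underscores,
        // so no replacement is needed
        static String createWid(String watchId) {
            return watchId + "_" + UUID.randomUUID().toString();
        }
    }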
Closes elastic/elasticsearch#4422
Original commit: elastic/x-pack-elasticsearch@dceb01ae5e
The $PRELERT_SRC_HOME environment variable is replaced with $CPP_SRC_HOME,
which points to the cpp sub-directory off the repository root
Original commit: elastic/x-pack-elasticsearch@02ef6d6be6
* Persist quantile documents with the jobId in the document Id
* Add job Id to snapshot Id
* Add job Id to categoriser state document Id
* Rename quantiles doc to start with job id as the other state docs do
* Fix restoring categoriser state
Original commit: elastic/x-pack-elasticsearch@3e5d3368b5
This test fails due to a changing cluster state(?). The
test checks that a local exporter is ready and then continues.
However, during the test, we see output similar to:
    skipping exporter [_local] as it isn't ready yet
which indicates that the cluster state has changed and the
exporter does not return a bulk anymore. Hence, the test
fails although at one point in time it returned a bulk.
By enabling trace logging we should be able to find out
what's going on.
Original commit: elastic/x-pack-elasticsearch@d7e2200dd9
The threadpool that supplies the threads used for job IO cannot be
resized, so the number of jobs cannot be dynamic either
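As an illustration of the constraint (names and the size value are assumptions, not the actual setting): a fixed-size pool for job IO means the maximum number of concurrently open jobs is decided up front and cannot change at runtime.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    class JobIoPoolSketch {
        public static void main(String[] args) {
            int maxRunningJobs = 10; // hypothetical value
            // the pool size is fixed at creation, so the job limit is too
            ExecutorService jobIoExecutor = Executors.newFixedThreadPool(maxRunningJobs);
            jobIoExecutor.shutdown();
        }
    }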
Original commit: elastic/x-pack-elasticsearch@c584bf7147
The watch lock service is not really needed, as there is already
a data structure that holds information about the currently executing
watches, which can be consulted before execution.
This change now checks whether a watch with the current id is already
running. If there is not, execution happens as usual. If there is,
however, then a watch record is created, stating that the watch is
currently being executed, which means that it is either being executed
or in the list of planned executions.
This way users can check in the watch history whether a watch has been
executed more often than it should have been.
In order to easily search for this, a new execution state called
`NOT_EXECUTED_ALREADY_QUEUED` has been added.
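A sketch of the check, with hypothetical names for the registry of current executions and for the returned state; in the real code a watch record is created rather than a plain string being returned:

    import java.util.Set;

    class AlreadyQueuedSketch {
        // hypothetical registry of watch ids that are executing or queued
        private final Set<String> currentExecutions;

        AlreadyQueuedSketch(Set<String> currentExecutions) {
            this.currentExecutions = currentExecutions;
        }

        // if the watch id is already executing (or queued), record that fact
        // instead of running it a second time
        String execute(String watchId, Runnable execution) {
            if (currentExecutions.add(watchId) == false) {
                return "NOT_EXECUTED_ALREADY_QUEUED";
            }
            try {
                execution.run();
                return "EXECUTED";
            } finally {
                currentExecutions.remove(watchId);
            }
        }
    }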
Original commit: elastic/x-pack-elasticsearch@867acec3c3
* Redesign the get anomaly_detectors APIs
This commit redesigns the APIs to get anomaly_detectors.
The new design has 2 GET APIs:
- An API to get the configurations: /anomaly_detectors/{job_id}
- An API to get the stats: /anomaly_detectors/{job_id}/_stats
For both APIs, entering "_all" as the job_id returns results for
all jobs (see the sketch below).
Note that page params have been removed. They were useful
when the configs were stored in an index. Now that the configs are part
of the cluster state, there is no need for them. Additionally, future support
for wildcard job_id expressions will give users a tool to narrow
down the GET actions to a certain subset of jobs, which will be
more useful than the from/size approach.
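A small sketch of the job_id resolution both endpoints share (the helper is hypothetical):

    import java.util.Collections;
    import java.util.Set;

    class JobIdResolverSketch {
        // "_all" addresses every configured job; anything else is a single job id
        static Set<String> resolveJobIds(String jobId, Set<String> allJobIds) {
            if ("_all".equals(jobId)) {
                return allJobIds;
            }
            return Collections.singleton(jobId);
        }
    }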
Follow up:
- Implement similar GET APIs for schedulers
- Remove scheduler_stats from the anomaly_detectors _stats API
as it will be part of the schedulers _stats API
Closes elastic/elasticsearch#548
Original commit: elastic/x-pack-elasticsearch@046a0db8f5