This matches the way tests that need to run without an Elasticsearch
bootstrap are run in core Elasticsearch. This should make merging to
x-pack easier.
Note that the no-bootstrap tests now run after the integration tests, but
this ordering doesn't really matter.
Original commit: elastic/x-pack-elasticsearch@5547f457b6
The bulk request needed to be reset after it was executed; otherwise stale documents were persisted repeatedly after they had been updated, causing a versioning error.
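A minimal sketch of the shape of the fix, assuming a persister class that wraps a `BulkRequestBuilder`; the class and method names are illustrative, not the actual x-pack code:

```java
import java.util.Map;

import org.elasticsearch.action.bulk.BulkRequestBuilder;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.client.Client;

class BulkPersisterSketch {
    private final Client client;
    private BulkRequestBuilder bulkRequest;

    BulkPersisterSketch(Client client) {
        this.client = client;
        this.bulkRequest = client.prepareBulk();
    }

    void index(String index, String type, String id, Map<String, Object> source) {
        bulkRequest.add(client.prepareIndex(index, type, id).setSource(source));
    }

    void executeRequest() {
        if (bulkRequest.numberOfActions() == 0) {
            return;
        }
        BulkResponse response = bulkRequest.get();
        if (response.hasFailures()) {
            // real code would log or handle the failures here
        }
        // Reset the builder; without this the old actions are executed again
        // on the next call and re-persist stale versions of the documents.
        bulkRequest = client.prepareBulk();
    }
}
```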
Original commit: elastic/x-pack-elasticsearch@263fa9d25d
* Gets build to use elasticsearch-extras
Also adds ci script for building repo on CI servers
To use this change you need to:
1. Clone elasticsearch: `git@github.com:elastic/elasticsearch.git`
2. Create a directory at the same level as elasticsearch called `elasticsearch-extra`
3. Clone this repository into the `elasticsearch-extra` directory
4. Run `gradle build` from the `elasticsearch-extra/prelert-legacy` directory or run `gradle :prelert-legacy:build` from the `elasticsearch` directory
* Adds USE_SSH option to ci script
* iter
Original commit: elastic/x-pack-elasticsearch@ea127dfef0
The job open API starts a task and ties it to an AutodetectCommunicator instance.
The job close API is a sugar API that uses the list and cancel task APIs to close the AutodetectCommunicator instance (see the sketch below).
The flush job and post data APIs redirect to the node holding the job task and then delegate the flush or the data to the AutodetectCommunicator instance.
Also:
* Added a basic multi-node cluster test.
* Fixed cluster state diff bugs; the ML metadata diffs had not been marked as named writeable.
* Moved the logic that waits for a job to open into OpenJobAction.TransportAction and moved the logic that was originally there to a new action named InternalOpenJobAction.
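A rough sketch of the "sugar over the task APIs" idea using the standard list/cancel task clients; the action name and the job-id matching are placeholders, not the actual implementation:

```java
import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.tasks.TaskInfo;

class CloseJobSketch {
    static void closeJob(Client client, String jobId) {
        // Find the task that the (internal) open job action started for this job.
        ListTasksResponse tasks = client.admin().cluster()
                .prepareListTasks()
                .setActions("cluster:admin/ml/job/open*")   // placeholder action name
                .setDetailed(true)
                .get();
        for (TaskInfo task : tasks.getTasks()) {
            if (task.getDescription() != null && task.getDescription().contains(jobId)) {
                // Cancelling the task shuts down the AutodetectCommunicator it owns.
                client.admin().cluster()
                        .prepareCancelTasks()
                        .setTaskId(task.getTaskId())
                        .get();
            }
        }
    }
}
```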
Original commit: elastic/x-pack-elasticsearch@194a058dd2
* Removes upload pack task from build
This task was preventing us from being an elasticsearch-extra project, and we cannot have it when we move to x-pack. Once we are in x-pack the unified build will upload the final artifact, so for now the CI build gains a step that uploads the pack artifact.
* Removes OS-specific stuff from the build
The CPP_LOCAL_DIST setting will now look for any `ml-cpp` artifacts for the same version in the specified directory.
* Review corrections
Original commit: elastic/x-pack-elasticsearch@be15e55ddb
This commit contains some more of the endpoint changes Sophie and Steve
agreed with Clint:
1. get_jobs_stats renamed to get_job_stats
2. Revert snapshot must now be done using an ID - other options removed
3. Renamed "categorydefinitions" to "categories" in endpoints
4. get_jobs now has an implicit _all if no job ID/wildcard is specified
5. There is an option to retrieve a specific model snapshot by ID in
get_model_snapshots
Relates elastic/elasticsearch#630
Original commit: elastic/x-pack-elasticsearch@9dd71c64a8
This change prepares for elastic/elasticsearch#22575, where we don't have ClusterService available in REST actions.
Original commit: elastic/x-pack-elasticsearch@87658c7fe8
This commit performs the following improvements:
- the time field is always requested as doc_value. This makes
specifying a time format for scheduled jobs unnecessary.
- adds DataDescription as a param to the PostDataAction. When set,
it overrides the job's DataDescription. This allows the scheduler to
override the job's DataDescription since it knows the data format (JSON)
and the time format (epoch_ms). This is not exposed in the REST API to
discourage users from using it.
- by default, data extractor search now requests doc_values for analysis fields. This is
expected to result in increased performance.
- a `_source` field is added to the scheduler config. This needs to be
set to true when one or more of the analysis fields do not have
doc_values.
- the ELASTICSEARCH data format is removed as it is now redundant.
- fixes the usage of `script_fields`. Previously, setting
`script_fields` resulted in none of the source being returned, so it did
not work when the analysis fields were a mixture of script and
non-script fields (see the search sketch below).
- ensures nested fields are handled properly
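A hedged sketch of the kind of search the data extractor now builds: doc_values for the time and analysis fields, `script_fields` alongside them, and `_source` fetching disabled unless a field lacks doc_values. The index-independent field names, class name and time range are made up for illustration, not the actual x-pack code:

```java
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.script.Script;
import org.elasticsearch.search.builder.SearchSourceBuilder;

class ExtractorSearchSketch {
    static SearchSourceBuilder build(long startMs, long endMs) {
        return new SearchSourceBuilder()
                .query(QueryBuilders.rangeQuery("@timestamp")
                        .gte(startMs).lt(endMs).format("epoch_millis"))
                .sort("@timestamp")
                .docValueField("@timestamp")     // time field is always a doc_value
                .docValueField("responsetime")   // analysis fields default to doc_values
                .scriptField("airline_lc",
                        new Script("doc['airline'].value.toLowerCase()"))
                .fetchSource(false);             // set to true only when a field has no doc_values
    }
}
```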
Closes elastic/elasticsearch#679, closes elastic/elasticsearch#267
Original commit: elastic/x-pack-elasticsearch@fed35ed354
NB: The actual C++ code will be deleted in a separate commit to
avoid swamping this commit.
If you want to have the Java build pick up locally built C++ then:
`export CPP_LOCAL_DISTS=$CPP_SRC_HOME/build/distributions`
Otherwise, C++ artifacts will be downloaded from S3.
Original commit: elastic/x-pack-elasticsearch@246672e81d
If a scheduled job concurrently gets stopped from within (e.g. lookback) and externally via the stop scheduler API, then make sure the stop logic is executed only once.
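A minimal sketch of a "stop only once" guard, assuming an `AtomicBoolean`-style flag; names are illustrative and this is not necessarily the commit's exact mechanism:

```java
import java.util.concurrent.atomic.AtomicBoolean;

class ScheduledJobStopGuard {
    private final AtomicBoolean stopped = new AtomicBoolean(false);

    // Whichever caller wins the compare-and-set (internal stop after lookback,
    // or the external stop scheduler API) runs the stop logic; the other
    // caller returns without running it again.
    void stop(Runnable stopLogic) {
        if (stopped.compareAndSet(false, true)) {
            stopLogic.run();
        }
    }
}
```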
Original commit: elastic/x-pack-elasticsearch@505c44f515
The _all field is now deprecated and disabled by default in elasticsearch
6.0.0. We no longer need to disable it explicitly.
Original commit: elastic/x-pack-elasticsearch@c71465083a
When a user makes a GET request to retrieve all resources of a type
(e.g. anomaly_detectors) and none exist, the response should be an
empty array with a 200 status code (see the sketch below). This commit fixes this issue for:
* anomaly_detectors and _stats
* schedulers and _stats
* lists
* buckets
All other GETs work fine already.
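A hedged sketch of the behavioural fix, with entirely hypothetical names: a "get all" style request with no matches returns an empty result (and hence a 200), while a request for a specific missing ID still fails with resource_not_found:

```java
import java.util.List;

import org.elasticsearch.ResourceNotFoundException;

class GetResourcesSketch {
    static <T> List<T> get(String id, List<T> found) {
        boolean getAll = id == null || "_all".equals(id);
        if (found.isEmpty() && getAll == false) {
            // A specific, missing id is still an error...
            throw new ResourceNotFoundException("No resource with id [" + id + "]");
        }
        // ...but "get everything" with no matches is a normal, empty response.
        return found;
    }
}
```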
Original commit: elastic/x-pack-elasticsearch@4daaa91aa4
I thought QUERY_AND_FETCH was the most efficient search type for the data
extractor, but it does not work with sorting: it causes all shard results to be
returned before sorting, and thus we may get out-of-order errors.
This commit switches to the default search type.
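Illustrative sketch only (the index and field names are made up): the extractor now leaves the search type alone, so the default QUERY_THEN_FETCH applies and the sort is merged across shards:

```java
import org.elasticsearch.action.search.SearchRequestBuilder;
import org.elasticsearch.client.Client;
import org.elasticsearch.search.sort.SortOrder;

class ExtractorSearchTypeSketch {
    static SearchRequestBuilder newSearch(Client client, String index) {
        return client.prepareSearch(index)
                // No setSearchType(...) call: the default QUERY_THEN_FETCH
                // gathers and sorts results from all shards before fetching.
                .addSort("@timestamp", SortOrder.ASC)
                .setSize(1000);
    }
}
```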
Original commit: elastic/x-pack-elasticsearch@d8a8155973
* Extract method ScheduledJob#postData
* Remove unreachable else statement
* Restrain usage of DataExtractor to a single thread (see the sketch below)
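One common way to confine a non-thread-safe extractor to a single thread is to funnel all calls through a single-threaded executor; this is a hedged sketch of that pattern, not necessarily the mechanism used in this commit:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class SingleThreadedExtraction {
    // All extraction work is submitted here, so only one thread ever
    // touches the underlying extractor.
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    Future<?> submitExtraction(Runnable extractAndPost) {
        return executor.submit(extractAndPost);
    }

    void shutdown() {
        executor.shutdownNow();
    }
}
```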
Original commit: elastic/x-pack-elasticsearch@5b9b310d9d
* prelert to ml
* Prelert to Ml
* PRELERT to ML
Exceptions:
* prelert.com - because it generally appears in links to our website, and
although these will eventually break it will be possible for people to see
what was there using https://archive.org/web/
* PRELERT_AWS_ACCESS_KEY_ID and PRELERT_AWS_SECRET_ACCESS_KEY - because changing
them creates a knock-on effect on infra that will be temporary anyway, since once
we're in x-pack we'll use x-pack keys
* prelert-artifacts - this is the name of the s3 bucket we're currently using,
and you cannot rename s3 buckets; as with the access keys it will become
obsolete when we merge to x-pack, so there's no point changing it now
* prelert-legacy - the name of our legacy Git repo has not changed
Original commit: elastic/x-pack-elasticsearch@720e83c7f2