* [DOCS] Model plot updates
Add to job create.
Remove terms from job resource.
* [DOCS] Describing terms as experimental
Original commit: elastic/x-pack-elasticsearch@815fa0ec37
* [DOCS] Add ML info about script fields
* [DOCS] Add links to ML script fields page
* [DOCS] Add ML API examples to transforms.asciidoc
* [DOCS] Addressed feedback in ML script field examples
* [DOCS] Add preview to ML script fields example
* [DOCS] Expanded code snippets in ML transform examples
* [DOCS] Add output for ML scripted fields example
* [DOCS] Add output for more ML scripted field examples
* [DOCS] Add output for final ML scripted field examples
* [DOCS] Add Kibana details for ML script fields
* [DOCS] Remove example from ML transforms
Original commit: elastic/x-pack-elasticsearch@51057b029f
This changes the validation criteria we use for user and role
names in the file realm, the native realm, and the
realm-agnostic code in x-pack security. The new criteria are:
A valid username's length must be at least 1 and no more than 1024
characters. It may not contain leading or trailing whitespace. All
characters in the name must be alphanumeric (`a-z`, `A-Z`, `0-9`),
printable punctuation or symbols in the https://en.wikipedia.org/wiki/Basic_Latin_(Unicode_block)[Basic Latin (ASCII) block],
or the space character.
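A minimal sketch of that check, assuming a plain regular-expression implementation (the real validator in x-pack security may be structured differently; the function name is hypothetical):

```python
import re

# Printable characters in the Basic Latin (ASCII) block run from
# 0x20 (space) to 0x7E (~); this one range covers alphanumerics,
# punctuation, symbols, and the space character.
_VALID = re.compile(r"^[\x20-\x7e]{1,1024}$")

def is_valid_name(name: str) -> bool:
    """Hypothetical re-creation of the criteria described above."""
    if not _VALID.match(name):
        return False             # wrong length, or characters outside ASCII
    return name == name.strip()  # no leading or trailing whitespace

assert is_valid_name("jacknich")
assert is_valid_name("jack nich")       # internal spaces are allowed
assert not is_valid_name(" jacknich")   # leading whitespace
assert not is_valid_name("jäcknich")    # outside the Basic Latin block
```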
Original commit: elastic/x-pack-elasticsearch@f77640f269
* [DOCS] Update ML APIs for Elasticsearch Reference
* [DOCS] Add X-Pack icon for ML APIs
* [DOCS] Add role attribute to ML APIs
Original commit: elastic/x-pack-elasticsearch@997ea39759
Prior to this change, if the persistent tasks framework noticed that a
job was running on a node that had been isolated but had rejoined the cluster,
then it would close that job. This was not ideal, because then the job
would persist state from the autodetect process that was isolated. This
commit changes the behaviour to kill the autodetect process associated
with such a job, so that it does not interfere with the autodetect process
that is running on the node where the persistent tasks framework thinks it
should be running.
In order to achieve this, a change has also been made to the behaviour of
force-close. Previously this would result in the autodetect process being
gracefully shut down asynchronously to the force-close request. However,
the mechanism by which this happened was the same as the mechanism for
cancelling tasks that end up running on more than one node due to nodes
becoming isolated from the cluster. Therefore, force-close now also kills
the autodetect process rather than gracefully stopping it. The documentation
has been changed to reflect this. It should not be a problem as force-close
is supposed to be a last resort for when normal close fails.
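For illustration, a force-close (which now kills the autodetect process rather than stopping it gracefully) looks like this; the job name and host are placeholders, and the endpoint follows the 5.x `_xpack` layout:

```python
import requests

# Force-close an anomaly detection job. As described above, force=true
# now kills the autodetect process outright instead of shutting it
# down gracefully.
resp = requests.post(
    "http://localhost:9200/_xpack/ml/anomaly_detectors/my-job/_close",
    params={"force": "true"},
)
resp.raise_for_status()
```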
relates elastic/x-pack-elasticsearch#1186
Original commit: elastic/x-pack-elasticsearch@578c944371
This commit adds a new Logstash component to x-pack to support the config management work. Currently, the functionality in this component is really simple: all it does is upload a new index template for the `.logstash` index. This index stores the actual LS configuration.
Once this template is bootstrapped in ES, Kibana can write user-created LS configs that adhere to the mapping defined here. In the future, we're looking into adding more functionality on the ES side to handle config documents, but for now, this is simple.
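As a hedged illustration (the template's registered name is not part of this commit message, so the sketch matches on the index pattern instead), the bootstrapped template can be inspected like any other:

```python
import requests

# Find the installed template that targets the .logstash index, which
# holds the user-created LS configurations written by Kibana.
templates = requests.get("http://localhost:9200/_template").json()
for name, body in templates.items():
    if ".logstash" in str(body.get("template", "")):
        print(name, "->", sorted(body.get("mappings", {})))
```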
relates elastic/x-pack-elasticsearch#1499, relates elastic/x-pack-elasticsearch#1471
Original commit: elastic/x-pack-elasticsearch@d7cc8675f7
* [DOCS] Add ML categorization of messages
* [DOCS] Describe ML categorization_examples_limit property
* [DOCS] Updated ML categorization of messages
* [DOCS] Add links to ML categorization
Original commit: elastic/x-pack-elasticsearch@6403f6ce84
When a user or client intends to delete a datafeed
and its job, there is little benefit in ensuring the
datafeed has gracefully stopped (i.e. no data loss).
Instead, the desired behaviour is to stop and
delete the datafeed as quickly as possible.
This change adds a force option to the delete
datafeed action. When the delete is forced,
the datafeed is isolated, its task removed and,
finally, the datafeed itself is removed from the
metadata.
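A sketch of the new option (datafeed name and host are placeholders):

```python
import requests

# With force=true the datafeed is isolated, its persistent task removed,
# and the datafeed itself deleted from the metadata, skipping the
# graceful stop.
resp = requests.delete(
    "http://localhost:9200/_xpack/ml/datafeeds/my-datafeed",
    params={"force": "true"},
)
resp.raise_for_status()  # acknowledged on success
```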
relates elastic/x-pack-elasticsearch#1533
Original commit: elastic/x-pack-elasticsearch@5ae0168bf2
* Add force delete job option
* Can’t kill a process on a 5.4 node
* Address review comments
* Rename KillAutodetectAction -> KillProcessAction
* Review comments
* Cancelling task is superfluous after it has been killed
* Update docs
* Revert "Cancelling task is superfluous after it has been killed"
This reverts commit 576950e2e1ee095b38174d8b71de353c082ae953.
* Remove unnecessary TODOs and logic that doesn't always force close
Original commit: elastic/x-pack-elasticsearch@f8c8b38217
Includes:
- Extensive changes to "mapping roles" section
- New section for role mapping API (see the sketch below)
- Updates to LDAP/AD/PKI realms to refer to API based role mapping
- Updates to LDAP/AD realms: `unmapped_groups_as_roles` only looks at file-based mappings
- Updates to LDAP/AD realms: new setting for "metadata"
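For illustration, a mapping created through the new API might look like this (the mapping name, role, and group DN are placeholders; the body shape follows the role mapping API):

```python
import requests

# Create a role mapping via the API rather than the role_mapping.yml file.
requests.put(
    "http://localhost:9200/_xpack/security/role_mapping/admins",
    json={
        "roles": ["superuser"],
        "enabled": True,
        # Grant the role to members of an LDAP/AD group.
        "rules": {"field": {"groups": "cn=admins,dc=example,dc=com"}},
    },
)
```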
Original commit: elastic/x-pack-elasticsearch@6349f665f5
Detectors now have a field called detector_index. This is also now the
field that needs to be supplied when updating a detector. (Previously
it was simply index, which was confusing.)
When detectors are added to an analysis_config, it reassigns
ascending detector_index values starting from 0. The intention is
never to allow deletion of detectors from an analysis_config, but
possibly to allow disabling them in the future. This ensures that
detector_index values in results will always tie up with detector_ids
in the detectors that created them.
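For example, updating a detector's description now names the detector by detector_index (job name and payload values are placeholders; exact update support varies by field):

```python
import requests

# Detectors are addressed by detector_index, which is assigned in
# ascending order from 0 when detectors are added to the analysis_config.
requests.post(
    "http://localhost:9200/_xpack/ml/anomaly_detectors/my-job/_update",
    json={"detectors": [{"detector_index": 0,
                         "description": "Unusually low event rate"}]},
)
```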
relates elastic/x-pack-elasticsearch#1275
Original commit: elastic/x-pack-elasticsearch@20a660b07b
* Remove sequenceNum from anomaly records and influencers
* Generate unique IDs without sequence numbers
* Remove more instances of sequence_num
* Handle parsing sequence_num from v5.4
Original commit: elastic/x-pack-elasticsearch@e60b206daf
* [DOCS] Add ML aggregations configuration scenario
* [DOCS] Refine ML configuration page
* [DOCS] Add ML aggregation details
* [DOCS] Add links to aggregations in Configuring ML
* [DOCS] Address feedback about ML aggregations
Original commit: elastic/x-pack-elasticsearch@8474144093
* Add sort parameter for get buckets (see the sketch below)
* Add secondary sort by time
* Use default values from actions in rest requests
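A sketch of the new parameters (job name and host are placeholders):

```python
import requests

# Request buckets sorted by anomaly_score, descending; ties are broken
# by the new secondary sort on the bucket time.
resp = requests.get(
    "http://localhost:9200/_xpack/ml/anomaly_detectors/my-job/results/buckets",
    params={"sort": "anomaly_score", "desc": "true"},
)
for bucket in resp.json().get("buckets", []):
    print(bucket["timestamp"], bucket["anomaly_score"])
```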
Original commit: elastic/x-pack-elasticsearch@a530c0bed6
* [DOCS] Add script_fields to ML datafeed APIs
* [DOCS] Add datafeedresource.asciidoc to build.gradle
* [DOCS] Addressed feedback in PR 1372
Original commit: elastic/x-pack-elasticsearch@3404ca7850
* [DOCS] Add ML analytical functions
* [DOCS] Add pages for ML analytical functions
* [DOCS] Add links to ML functions from API definitions
Original commit: elastic/x-pack-elasticsearch@ae50b431d3
As fields with underscores will be disallowed in master, and we have to
prepare for the upgrade, this commit renames the `_status` field to `status`.
Once the upgrade logic is in place in the 5.x branch, we can remove all the
old-style `_status` handling from the master branch.
Note: all the BWC compatibility tests that load 5.x indices are now
faking a finished upgrade by adding the `status` field to the mapping
of the watches index.
Original commit: elastic/x-pack-elasticsearch@9d5cc9aaec
* [DOCS] Add property table for ML Update Jobs API
* [DOCS] Updates based on feedback for ML Update Jobs API
* [DOCS] Removed detector properties from ML Update Jobs API
* [DOCS] Fixes typos
Original commit: elastic/x-pack-elasticsearch@68d1b5598c
The distribution of watches now happens on the nodes that hold the
watches index, instead of on the master node. This requires several
changes to the current implementation.
1. Running on shards and replicas
Watches now run on the nodes that hold the primaries and replicas of the
watches index. To ensure that a watch does not run twice, there is logic
that checks the local shards, runs a murmur3 hash on the watch id, and
takes the result modulo the number of shards and replicas; this determines
whether a watch should run locally (see the sketch after this list).
Watches are reloaded whenever that shard allocation changes.
2. Several master node actions moved to a HandledTransportAction, as they
are basically just aliases for indexing actions; among them are the
put/delete/get watch actions, the acknowledgement action, and the
de/activate actions.
3. The stats action moved to a broadcast node action, because we potentially
have to query every node to get watcher statistics.
4. Starting/stopping watcher is now a master node action, which updates
the cluster state; listeners then act on it. Because of this, watches
can be running on two nodes, if those nodes have different cluster state
versions, until the new watcher state is propagated.
5. Watcher is now started on all nodes. With the exception of the ticker
schedule engine, most classes do not need a lot of resources while running.
However, they have to run because of the execute watch API, which can hit
any node; it does not make sense to find the right shard for this watch
and only then execute (as this also has to work with a watch that has not
been stored before).
6. By using an indexing operation listener, each store of a watch now
parses the watch first and only stores it on successful parsing.
7. The execute watch API now uses the watcher threadpool for execution.
8. Getting the number of watches for the stats now simply asks the
different execution engines how many watches are scheduled, so this
no longer requires a search.
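A minimal sketch of the ownership check from item 1, using the `mmh3` package as a stand-in for the murmur3 hash used internally (the real code works per shard copy and differs in detail):

```python
import mmh3

def runs_locally(watch_id: str, local_copies: set[int],
                 total_copies: int) -> bool:
    """Hash the watch id and take it modulo the total number of shard
    copies (primaries plus replicas); the watch runs only on the node
    holding the matching copy, so it never runs twice."""
    slot = mmh3.hash(watch_id) % total_copies  # Python % keeps slot >= 0
    return slot in local_copies

# A node holding copies 0 and 2 out of 3 runs only the watches whose
# hash lands in one of those slots.
print(runs_locally("my-watch", local_copies={0, 2}, total_copies=3))
```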
There will be follow-up commits on this one, mainly to ensure backwards compatibility.
Original commit: elastic/x-pack-elasticsearch@0adb46e658