Following the changes in elastic/x-pack-elasticsearch#2975, the hard
limit on the number of ML jobs per node is no longer the only limiting
factor. Additionally, there is now a limit based on the estimated
memory usage of the jobs, which is expected to provide a more sensible
limit because it accounts for the differing resource requirements of
individual jobs.
As a result, it makes sense to raise the default for the hard limit on
the number of jobs, on the assumption that the memory limit will
prevent the node from becoming overloaded if an attempt is made to run
many large jobs. Raising the hard limit will allow more small jobs to
be run per node than was previously the case by default.
Of course, this change to the default will have no effect for customers
who have already overridden the default in their config files.
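To make the interplay of the two limits concrete, here is a minimal
Java sketch. It is not the actual x-pack implementation: the class and
field names are illustrative, and maxOpenJobs simply stands in for a
hard-limit setting such as xpack.ml.max_open_jobs. A node accepts a new
job only while it is below both the job-count limit and the
estimated-memory limit.

    // Illustrative sketch, not the real x-pack code.
    final class NodeCapacityCheck {

        private final int maxOpenJobs;        // hard limit on open jobs per node
        private final long maxMlMemoryBytes;  // memory assumed available to ML jobs

        NodeCapacityCheck(int maxOpenJobs, long maxMlMemoryBytes) {
            this.maxOpenJobs = maxOpenJobs;
            this.maxMlMemoryBytes = maxMlMemoryBytes;
        }

        // A node may open another job only while it satisfies both limits.
        boolean canOpenJob(int openJobs, long memoryInUseBytes, long newJobEstimateBytes) {
            boolean withinCountLimit = openJobs < maxOpenJobs;
            boolean withinMemoryLimit =
                memoryInUseBytes + newJobEstimateBytes <= maxMlMemoryBytes;
            // The memory check guards against a few large jobs overloading the
            // node, so the count limit can default higher to admit more small jobs.
            return withinCountLimit && withinMemoryLimit;
        }
    }

With both checks in place, a handful of large jobs is stopped by the
memory limit well before the count limit is reached, while many small
jobs can use the extra headroom the higher count limit provides.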
Original commit: elastic/x-pack-elasticsearch@9fed1d1237
This change modifies the way ML jobs are assigned to nodes so that the
decision is based primarily on the estimated memory footprint of the
jobs. The memory footprint comes from the model size stats if the job
has been running long enough to report them, otherwise from the model
memory limit. In addition, an allowance is added for the program code
and stack.
If insufficient information is available to base the allocation
decision on memory requirements, the decision falls back to using
simple job counts per node.
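As a rough illustration of that estimation order, the Java sketch below
is a hedged approximation rather than the real x-pack code; the names
(JobMemoryEstimator, modelBytesFromSizeStats, PROCESS_OVERHEAD_BYTES)
and the overhead value are assumptions. It prefers the observed model
size stats, falls back to the configured model memory limit, adds a
fixed allowance for the process code and stack, and signals when too
little is known so the caller can fall back to job counts.

    import java.util.List;
    import java.util.OptionalLong;

    // Illustrative sketch, not the real x-pack code.
    final class JobMemoryEstimator {

        // Allowance for the native process code and stack; value is illustrative.
        static final long PROCESS_OVERHEAD_BYTES = 100L * 1024 * 1024;

        // Prefer the observed model size stats when the job has run long enough
        // to report them, otherwise fall back to the configured model memory
        // limit; empty means there is not enough information for this job.
        static OptionalLong estimateFootprint(OptionalLong modelBytesFromSizeStats,
                                              OptionalLong modelMemoryLimitBytes) {
            OptionalLong basis = modelBytesFromSizeStats.isPresent()
                ? modelBytesFromSizeStats
                : modelMemoryLimitBytes;
            return basis.isPresent()
                ? OptionalLong.of(basis.getAsLong() + PROCESS_OVERHEAD_BYTES)
                : OptionalLong.empty();
        }

        // Sum the estimates for the jobs already assigned to a node; if any
        // job's footprint is unknown, return empty so the caller can fall back
        // to simple job counts per node.
        static OptionalLong totalAssignedBytes(List<OptionalLong> footprints) {
            long total = 0;
            for (OptionalLong footprint : footprints) {
                if (!footprint.isPresent()) {
                    return OptionalLong.empty();
                }
                total += footprint.getAsLong();
            }
            return OptionalLong.of(total);
        }
    }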
Relates elastic/x-pack-elasticsearch#546
Original commit: elastic/x-pack-elasticsearch@b276aedf2f
* [DOCS] Clarify ML node settings re transport requests
* [DOCS] Clarify xpack.ml.enabled based on feedback
Original commit: elastic/x-pack-elasticsearch@3102d1e3f3