PUT /_xpack/license with no content or content-type should fail with an appropriate error message rather than throwing an NPE.
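For illustration only, a minimal sketch of the expected behaviour; the host, port, and low-level REST client usage are assumptions, not taken from the actual test suite:

    import org.apache.http.HttpHost;
    import org.elasticsearch.client.ResponseException;
    import org.elasticsearch.client.RestClient;

    public class EmptyLicensePutSketch {
        public static void main(String[] args) throws Exception {
            try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
                try {
                    // deliberately no request body and no content-type header
                    client.performRequest("PUT", "/_xpack/license");
                } catch (ResponseException e) {
                    // expected: a 4xx response with a readable message, not a 500 caused by an NPE
                    int status = e.getResponse().getStatusLine().getStatusCode();
                    System.out.println(status + ": " + e.getMessage());
                }
            }
        }
    }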
Original commit: elastic/x-pack-elasticsearch@f8c744d2a2
The REST test waited for the watch to run in the background, but there was no
guarantee that this actually happened. It also waited for five seconds instead
of simply executing the watch manually.
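For reference, a rough sketch of triggering a watch on demand via the execute watch API; the watch id and client setup are illustrative assumptions:

    import org.apache.http.HttpHost;
    import org.elasticsearch.client.Response;
    import org.elasticsearch.client.RestClient;

    public class ExecuteWatchSketch {
        public static void main(String[] args) throws Exception {
            try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
                // run the watch "my_watch" right away instead of sleeping and hoping its schedule fires
                Response response = client.performRequest("POST", "/_xpack/watcher/watch/my_watch/_execute");
                System.out.println(response.getStatusLine());
            }
        }
    }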
relates elastic/x-pack-elasticsearch#2255
Original commit: elastic/x-pack-elasticsearch@56765a649e
These tests have recurring but not reproducible failures, where the stash is
filled by a second PUT operation and the watcher stats response does not match.
Setting the log level to trace should shed some light on this.
As the smoke test suite consists of only four tests, this will not lead to a
log explosion.
Relates elastic/x-pack-elasticsearch#1513, elastic/x-pack-elasticsearch#1874
Original commit: elastic/x-pack-elasticsearch@5832dc7990
This change makes 2 improvements to the max_running_jobs setting:
1. Namespaces it by adding the xpack.ml. prefix
2. Renames "running" to "open", because the "running" terminology
is not used elsewhere
The old max_running_jobs setting is used as a fallback if the new
xpack.ml.max_open_jobs setting is not specified. max_running_jobs
is deprecated and (to ease backporting in the short term) will be removed in
7.0 by a separate PR closer to the 7.0 release.
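A sketch of how such a fallback can be wired up with the Setting API; the class name, default value, and properties are illustrative, not copied from the ML source:

    import org.elasticsearch.common.settings.Setting;
    import org.elasticsearch.common.settings.Setting.Property;

    public class MaxOpenJobsSettingSketch {
        // old, deprecated name kept as a fallback
        public static final Setting<Integer> MAX_RUNNING_JOBS =
                Setting.intSetting("max_running_jobs", 10, 1, Property.NodeScope, Property.Deprecated);

        // new, namespaced name; falls back to max_running_jobs when not explicitly set
        public static final Setting<Integer> MAX_OPEN_JOBS =
                Setting.intSetting("xpack.ml.max_open_jobs", MAX_RUNNING_JOBS, 1, Property.NodeScope);
    }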
Relates elastic/x-pack-elasticsearch#2185
Original commit: elastic/x-pack-elasticsearch@18c539f9bb
These tests used to fail rarely because, during a watch execution, one of the
watcher shards was relocated, resulting in a second execution of the watch.
To prevent this, the tests no longer need to actually create any shards, which
is what potentially causes watcher to be rebalanced.
This also simplifies and speeds up the tests.
relates elastic/x-pack-elasticsearch#1608
Original commit: elastic/x-pack-elasticsearch@1cfac1145d
This cleans up logging when starting several Elasticsearch instances, as
otherwise you cannot see which node emits a given log message.
Original commit: elastic/x-pack-elasticsearch@c8c2819d86
When a watch is executed, it sends an update request to the watch to update
its status.
This update request also updates the status.state field, which records whether
the watch is active. If a watch gets disabled while it is being executed, the
current execution will set the watch back to active.
This commit changes that behaviour: the state of a watch is never modified when
updating the status after execution, so activate/deactivate calls work as
expected regardless of whether a watch is currently being executed.
This fixes not only the incorrect behaviour but also some flaky tests.
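Conceptually the fix boils down to the execution-time status update never carrying status.state; a hypothetical illustration (index, type, and field names are simplified, not the actual watcher source):

    import java.util.HashMap;
    import java.util.Map;

    import org.elasticsearch.action.update.UpdateRequest;

    public class WatchStatusUpdateSketch {
        static UpdateRequest statusUpdate(String watchId, Map<String, Object> executedStatus) {
            // copy only execution-related status fields and deliberately drop "state",
            // so activate/deactivate calls made during execution are not overwritten
            Map<String, Object> status = new HashMap<>(executedStatus);
            status.remove("state");
            Map<String, Object> doc = new HashMap<>();
            doc.put("status", status);
            return new UpdateRequest(".watches", "watch", watchId).doc(doc);
        }
    }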
Original commit: elastic/x-pack-elasticsearch@ca69109ecb
* [DOCS] Add user setup to X-Pack install info
* [DOCS] Add TLS steps to X-Pack install
* [DOCS] Clarify SSL settings in X-Pack install
Original commit: elastic/x-pack-elasticsearch@eee37729ff
* [DOCS] Update APIs for multiple jobs or datafeeds
* [DOCS] Fix syntax diagrams for ML stop/close APIs
* [DOCS] Removed TBD authorization for ML APIs
Original commit: elastic/x-pack-elasticsearch@1a9137a5a7
It is really hard to debug some issues with watcher when only e.getMessage()
is returned as the failure reason instead of the whole stack trace.
This commit gets rid of ExceptionsHelper.detailedMessage(e) and always returns
the whole exception.
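A sketch of the idea (not the actual watcher code): rather than storing ExceptionsHelper.detailedMessage(e), serialize the complete exception, for example via the ElasticsearchException XContent helper:

    import java.io.IOException;

    import org.elasticsearch.ElasticsearchException;
    import org.elasticsearch.common.xcontent.ToXContent;
    import org.elasticsearch.common.xcontent.XContentBuilder;

    public class ErrorFieldSketch {
        static void writeError(XContentBuilder builder, Exception e) throws IOException {
            builder.startObject("error");
            // writes the structured exception (type, reason, caused_by chain) instead of just e.getMessage()
            ElasticsearchException.generateThrowableXContent(builder, ToXContent.EMPTY_PARAMS, e);
            builder.endObject();
        }
    }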
This commit also extends the watch history so that all fields named error are
treated as objects to make sure they do not get indexed, no matter where they
appear in the hierarchy.
In addition, a few Field interface classes that only contained parse fields were removed.
relates elastic/x-pack-elasticsearch#1816
Original commit: elastic/x-pack-elasticsearch@b2ce680139
This commit adds the max_running_jobs setting from elasticsearch.yml
into a node attribute called ml.max_open_jobs. Previously there was
an assumption that max_running_jobs would be the same for all nodes in
the cluster. However, during a rolling cluster restart where the value
of the setting is being changed this clearly cannot be the case, and
would cause unexpected/unpredictable limits to be used during the period
when different nodes had different settings.
For backwards compatibility, if another node in the cluster has not added
its setting for max_running_jobs to the cluster state then the old
(flawed but better than nothing) approach is applied, i.e. assume the
remote node's setting for max_running_jobs is equal to that of the node
deciding the job allocation.
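A sketch of the allocation-side lookup (illustrative only, not the actual ML code): prefer the limit the remote node advertises as a node attribute and fall back to the local value when the attribute is absent:

    import java.util.Map;

    import org.elasticsearch.cluster.node.DiscoveryNode;

    public class MaxOpenJobsLookupSketch {
        static int maxOpenJobs(DiscoveryNode node, int localMaxRunningJobs) {
            Map<String, String> attributes = node.getAttributes();
            String value = attributes.get("ml.max_open_jobs");
            // older nodes do not publish the attribute yet; assume they use the same limit as this node
            return value != null ? Integer.parseInt(value) : localMaxRunningJobs;
        }
    }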
Relates elastic/x-pack-elasticsearch#2185
Original commit: elastic/x-pack-elasticsearch@1e62b89183
Validating job groups during parsing results in the validation error being
wrapped in a parse exception. The UI then does not display the cause of the
error. Finally, it is conceptually not a parse error, so the validation
belongs outside the parsing phase.
Original commit: elastic/x-pack-elasticsearch@a03f002bdc
In order to not restart watcher on every test, this checks whether the current
test is a watcher test and only restarts watcher in that case.
In addition, this also checks that watcher is not already marked as started,
as otherwise restarting does not make sense. Lastly, this waits until watcher
is marked as started before proceeding.
Original commit: elastic/x-pack-elasticsearch@a8d72f3ebb
Only unit tests were broken. Production ML code was always terminating
bulk requests with newlines.
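For context, the _bulk endpoint expects newline-delimited JSON where every line, including the last one, ends with a newline; a tiny sketch of well-formed framing (index and field names are made up):

    public class BulkBodySketch {
        public static void main(String[] args) {
            StringBuilder bulk = new StringBuilder();
            bulk.append("{\"index\":{\"_index\":\"results\",\"_type\":\"doc\",\"_id\":\"1\"}}\n");
            bulk.append("{\"field\":\"value\"}\n"); // the trailing newline on the final line is required
            System.out.print(bulk);
        }
    }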
Original commit: elastic/x-pack-elasticsearch@96ed06fed3
If one of the old watcher templates does not exist when we try
to delete it, the upgrade should just continue.
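A sketch of the intended behaviour (client call and template name are illustrative assumptions): treat a missing template as already deleted and carry on with the upgrade:

    import org.elasticsearch.client.Client;
    import org.elasticsearch.indices.IndexTemplateMissingException;

    public class DeleteOldTemplateSketch {
        static void deleteIfPresent(Client client, String templateName) {
            try {
                client.admin().indices().prepareDeleteTemplate(templateName).get();
            } catch (IndexTemplateMissingException e) {
                // the old watcher template was never installed or is already gone; continue the upgrade
            }
        }
    }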
Original commit: elastic/x-pack-elasticsearch@6a52bad329
This removes the `IndicesStatsCollector` and instead reuses the superset version of the call from the `IndexStatsCollector`.
On clusters with a large number of indices, this should noticeably reduce wasted calls and memory allocation without any difference in the output.
Original commit: elastic/x-pack-elasticsearch@93b09878e4
This commit re-enables the OpenLDAP tests that were previously running against a one-off instance
in AWS but now run against a vagrant fixture. There were some IntegTests that would run against the
OpenLDAP instance randomly, but with this change they no longer run against OpenLDAP. This is OK, as
the functionality tested by these has coverage elsewhere.
relates elastic/x-pack-elasticsearch#1823
Original commit: elastic/x-pack-elasticsearch@ac9bc82297
record_count is no longer written to new results, but is still tolerated
for backwards compatibility. However, in the backwards compatibility case
the results index must already contain the required mapping. There's no
need to add this mapping to newly created results indices.
Original commit: elastic/x-pack-elasticsearch@e586f3ba96
* Fix TemplateTransformMappingTests to work even if the date rolls over during execution.
* Re-enable a forgotten test in BootStrapTests.
* Remove the SecurityF/MonitoringF/WatcherF classes, as there is a gradle command to easily start Elasticsearch with X-Pack.
* Remove HasherBenchmark, as it is not a test and relies on RandomContext, which is not available anymore (also I think a JMH benchmark would be needed here).
* Remove ManualPublicSmtpServersTester, which was not usable anymore.
* Remove OldWatcherIndicesBackwardsCompatibilityTests, which is now covered by dedicated rolling upgrade tests.
* Remove the unused EvalCron class.
Original commit: elastic/x-pack-elasticsearch@100fa9e9b0