- stops the datafeed when post/flush throw a conflict exception.
A conflict exception signifies the job state is not opened, thus
we are better off stopping the datafeed.
- handles flushing the job the same way as posting to the job.
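A minimal sketch of the intended handling, with hypothetical `jobCall`/`stopDatafeed` hooks (only `ElasticsearchStatusException` and `RestStatus` are real Elasticsearch classes):

```java
import org.elasticsearch.ElasticsearchStatusException;
import org.elasticsearch.rest.RestStatus;

// Hypothetical helper: a 409 CONFLICT from post/flush means the job is no longer
// opened, so stop the datafeed instead of retrying the call.
static void postOrFlush(Runnable jobCall, Runnable stopDatafeed) {
    try {
        jobCall.run();                  // post data to the job, or flush it
    } catch (ElasticsearchStatusException e) {
        if (e.status() == RestStatus.CONFLICT) {
            stopDatafeed.run();         // job state is not opened; stopping is the safest option
        } else {
            throw e;                    // any other failure is handled as before
        }
    }
}
```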
relates elastic/x-pack-elasticsearch#855
Original commit: elastic/x-pack-elasticsearch@49a54912c2
Makes the log more readable in editors not set to UTF-8.
Customers may well be in this situation on Linux/Windows.
Original commit: elastic/x-pack-elasticsearch@4e59fc90cf
The commit changes how LocalExporterTests stops: it now uses the
node_stats document collected on each node and checks whether it is
older than a given number of seconds (10). It also removes log traces.
Original commit: elastic/x-pack-elasticsearch@0384690b41
Before this change, the persistent task operations related to opening
and closing jobs would time out long before the operations related
to native processes did.
Original commit: elastic/x-pack-elasticsearch@23076b773b
Changes the logging of LDAP authentication failures from "always" to "only if the user failed to be authenticated"
Previously there were cases (such as having 2 AD realms) where successful user authentication would still cause an INFO message to be written to the log for every request.
Now that message is suppressed, but a WARN message is added _if-and-only-if_ the user cannot be authenticated by any realm.
This is implemented via a new value stored in the ThreadContext that the AuthenticationService chooses to log (or not log) depending on the result of the authentication process.
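A rough sketch of that mechanism with an assumed key name and helper class; `ThreadContext#putTransient`/`getTransient` are real Elasticsearch APIs, everything else here is illustrative:

```java
import org.apache.logging.log4j.Logger;
import org.elasticsearch.common.util.concurrent.ThreadContext;

// Illustrative only: realms record why they rejected the user instead of logging right away;
// the AuthenticationService emits a single WARN only if no realm authenticated the user.
class AuthFailureMessageSketch {
    static final String FAILURE_KEY = "_xpack_auth_failure_message"; // assumed key name

    static void recordRealmFailure(ThreadContext ctx, String realm, String reason) {
        if (ctx.getTransient(FAILURE_KEY) == null) { // a transient key may only be set once
            ctx.putTransient(FAILURE_KEY, "realm [" + realm + "] could not authenticate the user: " + reason);
        }
    }

    static void onAuthenticationResult(ThreadContext ctx, boolean authenticated, Logger logger) {
        if (authenticated == false) {
            String message = ctx.getTransient(FAILURE_KEY);
            logger.warn("authentication failed: {}", message); // WARN if-and-only-if every realm failed
        }
        // on success the recorded message is simply discarded, so no per-request INFO noise
    }
}
```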
Closes: elastic/x-pack-elasticsearch#887
Original commit: elastic/x-pack-elasticsearch@b81b363729
The PR detects if SMILE is being provided, then correctly slices the stream such that each document is parsed individually. This is required because Jackson's SMILE parser is stricter than its JSON parser and will stop parsing when it hits a streamSeparator (unlike JSON, which will eagerly try to find more objects to parse).
Removes the forced-headers from the various REST tests.
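A simplified sketch of the slicing idea in plain Java (hypothetical helper; the separator byte would come from the detected content type):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative only: split the request body on the content type's stream separator so each
// document is handed to the parser on its own. This matters for SMILE because its parser
// stops at the separator rather than scanning ahead for the next object the way JSON does.
static List<byte[]> sliceDocuments(byte[] body, byte streamSeparator) {
    List<byte[]> docs = new ArrayList<>();
    int start = 0;
    for (int i = 0; i < body.length; i++) {
        if (body[i] == streamSeparator) {
            if (i > start) {
                docs.add(Arrays.copyOfRange(body, start, i)); // one complete document
            }
            start = i + 1; // skip the separator itself
        }
    }
    if (start < body.length) {
        docs.add(Arrays.copyOfRange(body, start, body.length)); // trailing document without a separator
    }
    return docs;
}
```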
relates elastic/x-pack-elasticsearch#642
Original commit: elastic/x-pack-elasticsearch@c0e97cd545
Instead of having a separate listener for indicating that the current task is finished, this commit switches to using the allocated task object itself.
Original commit: elastic/x-pack-elasticsearch@7ad5362121
`PersistentTasksExecutor#getAssignment(...)` should be a cheap and side-effect-free method,
but in the case of `OpenJobPersistentTasksExecutor` and `StartDatafeedPersistentTasksExecutor`, before this change, it would index a document each time `getAssignment(...)` was invoked.
Original commit: elastic/x-pack-elasticsearch@5ca5890baf
The change applies chunking by default to aggregated datafeeds.
The chunking is set to manual mode with a time_span of
1000 histogram buckets.
The motivation for the change is two-fold:
1. It helps to avoid memory pressure and blowing up memory.
Users may perform a lookback on a very long period of time. In that
case, we may hold a search response for all that time which could
include too many buckets. By chunking, we avoid that situation
as we know we'll only keep results for 1000 buckets at a time.
2. It makes cancellation more responsive.
In elastic/x-pack-elasticsearch#862 we made the processing of a search response cancellable in a
responsive manner. However, the search phase cannot be cancelled at
the moment. Chunking makes the search phase shorter, which will
result in a better user experience when users stop an aggregated
datafeed.
Also note the change sets the default chunking_config on datafeed
creation so the setting is no longer hidden.
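To make the arithmetic concrete, a tiny illustration with hypothetical helper names (the actual datafeed setting is the `chunking_config` mentioned above):

```java
import java.util.concurrent.TimeUnit;

// Illustrative only: with manual chunking each search covers 1000 histogram buckets,
// so the chunk's time_span is simply 1000 times the histogram interval.
static long manualChunkSpanMillis(long histogramIntervalMillis) {
    return 1000L * histogramIntervalMillis;
}

// Example: a 5 minute histogram interval gives chunks of 1000 * 5m, i.e. roughly 3.5 days,
// so a month-long lookback is split into about 9 searches instead of one huge one.
static void example() {
    long interval = TimeUnit.MINUTES.toMillis(5);
    System.out.println(manualChunkSpanMillis(interval) + " ms per chunk");
}
```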
Relates to elastic/x-pack-elasticsearch#803
Original commit: elastic/x-pack-elasticsearch@ae8f120f5f
When a datafeed task is created but cannot be assigned, the task
has a null status. This means _stats reports it as stopped; however,
deleting it fails. In addition, it is a better experience to fail
the start datafeed request altogether and give users the chance
to fix their data indices.
This change fails the start datafeed request if the datafeed cannot be assigned.
relates elastic/x-pack-elasticsearch#1018
Original commit: elastic/x-pack-elasticsearch@532288fda0
Retries should already be handled by TransportMasterNodeAction, so there is no need to introduce another retry layer in the Persistent Tasks code.
Original commit: elastic/x-pack-elasticsearch@967ac7f7fa
This commit changes how LocalExporterTests stops the monitoring
components: it first stops the monitoring service (but keeps the
local exporter enabled), deletes the monitoring indices and checks
whether they are recreated, and then disables the local exporter.
Original commit: elastic/x-pack-elasticsearch@4c4809a660
Closing a job may take a while. In the meantime it is possible to start a datafeed, because before this change the job state remained OPENED.
With this change, when the executor node receives the close job request, it first sets the job state to CLOSING and then closes the job (shutting down the autodetect process, etc.).
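A rough sketch of the new ordering, using hypothetical helpers:

```java
// Illustrative only: the job state moves to CLOSING before the (potentially slow) autodetect
// shutdown, so a start-datafeed request issued in the meantime no longer sees an OPENED job.
class CloseJobOrderingSketch {
    enum JobState { OPENED, CLOSING, CLOSED }

    void closeJob(String jobId) {
        updateJobState(jobId, JobState.CLOSING); // fast, immediately visible to datafeed start checks
        stopAutodetectProcess(jobId);            // slow part: flush results, persist state, terminate process
        updateJobState(jobId, JobState.CLOSED);
    }

    void updateJobState(String jobId, JobState state) { /* persists the new task status (stub) */ }
    void stopAutodetectProcess(String jobId) { /* closes the native autodetect process (stub) */ }
}
```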
relates elastic/x-pack-elasticsearch#990
Original commit: elastic/x-pack-elasticsearch@d8d89c0756
This commit removes the smoke-test-monitoring-with-security project
and replaces it with a REST test.
Original commit: elastic/x-pack-elasticsearch@f1665815c2
The execution has diverged too much from the post data, flush and update process APIs since support for closing all jobs was added.
The logic is now easier to understand as it exists in a single source file instead of in both CloseJobAction and TransportJobTaskAction.
Original commit: elastic/x-pack-elasticsearch@daf5fabad5
Users currently have difficulty diagnosing authentication failures.
Some logging messages mislead them, and in other cases there are unexpected behaviours that are not logged at all.
This change adds DEBUG log messages and adjusts some existing messages in an attempt to alleviate that problem.
Original commit: elastic/x-pack-elasticsearch@c6ea98b038
Increase the timeout to give enough time for a datafeed to
stop smoothly.
This is the second step to avoid hitting the default timeout.
The first was ensuring aggregated datafeeds are cancellable in
a responsive manner. The third and final step will be to
apply chunking in aggregated datafeeds in order to shorten
the duration of the search, which will make cancellation even
more responsive.
Relates elastic/x-pack-elasticsearch#803
Original commit: elastic/x-pack-elasticsearch@db642330ec
This commit restores the ability to build x-pack-elasticsearch without issues when running without
access to the internet. When the `--offline` flag is used, we will not try to contact Vault or the
AWS APIs to retrieve the ml-cpp binaries; instead, Gradle will use a cached version even though
it may be expired.
relates elastic/x-pack-elasticsearch#726
Original commit: elastic/x-pack-elasticsearch@b0915d8fa9
This is analogous to the bwc-zip for elasticsearch. The one caveat is that,
due to the structure of how ES+xpack must be checked out, we end up with
a third clone of elasticsearch (the second being in :distribution:bwc-zip).
But the rolling upgrade integ test passes with this change.
relates elastic/x-pack-elasticsearch#870
Original commit: elastic/x-pack-elasticsearch@34bdce6e99
This commit renames and moves the forked delete-by-query classes from being ml-specific to being
xpack common classes, since an upcoming security feature plans to make use of them. Additionally, this
commit fixes an issue where the dbq action was being executed by the calling user instead of the
xpack user for certain requests. This was found when adding an authorization change that restricts
this action's execution to the xpack user only.
Original commit: elastic/x-pack-elasticsearch@d5967e7255