The commit that converted the results index to a single type
broke the search for fetching results for renormalization.
This commit fixes that.
Original commit: elastic/x-pack-elasticsearch@1ca7517adc
REST endpoints that support GET and POST need
to also support source parsing. As these
endpoints can accept a body but some clients
do not allow doing a GET with a request body,
Elasticsearch has support for parsing via a
source URI parameter. This commit adds source
handling to all such endpoints.
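A minimal sketch of the fallback, using hypothetical helper and parameter handling rather than the real RestRequest-based handlers: if a request carries no body, the value of the `source` URI parameter is treated as the body.

```java
import java.util.HashMap;
import java.util.Map;

public final class SourceParamSketch {

    /**
     * Returns the effective request body: the real body if present,
     * otherwise the value of the "source" URI parameter, otherwise null.
     * Illustrative only; the actual x-pack handlers differ.
     */
    static String resolveBody(String body, Map<String, String> params) {
        if (body != null && !body.isEmpty()) {
            return body;                  // normal POST / GET-with-body case
        }
        return params.get("source");      // GET ...?source={"query":...} fallback
    }

    public static void main(String[] args) {
        Map<String, String> params = new HashMap<>();
        params.put("source", "{\"from\":0,\"size\":10}");
        // A GET without a body still yields a parseable request body.
        System.out.println(resolveBody(null, params));
    }
}
```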
relates elastic/x-pack-elasticsearch#1204
Original commit: elastic/x-pack-elasticsearch@3949ea31fe
Previously we used to normalize records with their buckets. This required
nested scrolls: an outer scroll over buckets, then a nested scroll for
records in each bucket. This was fragile.
The new approach is to simply scroll first through buckets, then through
records. This is made possible because we no longer store max_record_score
on buckets nor bucket anomaly_score on records.
While making these changes I noticed that the PerPartitionMaxProbabilities
class was redundant (because it was storing max_record_score in the case of
per-partition normalization), so I removed it. I also removed a redundant
Map from the Bucket class and fixed its equals() and hashCode() methods.
relates elastic/x-pack-elasticsearch#1115
Original commit: elastic/x-pack-elasticsearch@efbee63573
Deleting a job issues 2 cluster state updates.
The first marks the job as deleted.
The second actually removes the job.
Both check that there is no datafeed referring to the job.
If a put datafeed request arrives between those 2 cluster
state updates, the datafeed gets created and the final
job cluster state update fails. This means we end up with
both the job and the datafeed, but the job's results and
state have been deleted.
This commit changes the behaviour so that the put
datafeed request fails for a job that is marked as deleted,
which avoids partially executing the delete action.
relates elastic/x-pack-elasticsearch#1510
Original commit: elastic/x-pack-elasticsearch@76fa0f0b1a
This commit changes all results to use the single doc type.
All searches were adjusted to work without the need to specify
a type, which ensures BWC with 5.4 results.
Additional work is needed to put the new doc mapping in indices
created with 5.4, but that will be done in a separate PR.
Relates elastic/x-pack-elasticsearch#668
Original commit: elastic/x-pack-elasticsearch@041c88ac2d
* Reduced a longish timeout to a shorter one, as a watch should be
executed in an HTTP test.
* Ensured that the TimeThrottleIntegration tests only query for their own
watches in the watch history, and use random names for watch ids
* HipChatServiceTests configured a deprecated logging package, so it was
not possible to follow the HTTP calls to the HipChat service endpoint.
relates elastic/x-pack-elasticsearch#1514
Relates elastic/x-pack-elasticsearch#1515
Original commit: elastic/x-pack-elasticsearch@adb492e4e9
* Remove sequenceNum from anomaly records and influencers
* Generate unique IDs without sequence numbers
* Remove more instances of sequence_num
* Handle parsing sequence_num from v5.4
Original commit: elastic/x-pack-elasticsearch@e60b206daf
This commit simplifies the WatchStatsTests to use a MockScriptPlugin.
The latch script engine previously depended on a static instance of the
engine to contain the latches. These are now moved to statics of the
test class itself.
Original commit: elastic/x-pack-elasticsearch@4170cd1bd3
This commit enables security to work with an index named .security (as
it could before) OR a .security alias that points to a concrete
index with a different name that holds the security data. This prepares
for migrating from a 5.x to a 6.x security index, allowing the
underlying security index to be changed and re-indexed while maintaining
a .security alias that points to the underlying updated index.
relates elastic/x-pack-elasticsearch#1216
Original commit: elastic/x-pack-elasticsearch@9fee12e5a0
Suppress many job/datafeed errors if a node is known to be shutting down. Also, ensure started datafeeds and open jobs don't end up stopped/failed due to errors as the shutdown progresses, as this would prevent them automatically relocating to a different node.
relates elastic/x-pack-elasticsearch#1114
Original commit: elastic/x-pack-elasticsearch@e56a7dbea1
Set job to failed if the autodetect manager fails closing, fix force closing of jobs that hang in
the closing state, set a timeout when waiting for the cluster state update, and disallow closing
failed jobs with a normal close
relates elastic/x-pack-elasticsearch#1453
Original commit: elastic/x-pack-elasticsearch@493cf85e22
Aliases might be contained in requests that check index permissions
to disable caches etc. This commit preserves permissions for
aliases as well.
Original commit: elastic/x-pack-elasticsearch@233195aeba
The authentication object was changed in 5.4.0 in that it was conditionally signed depending on
the version and other factors. A bug was introduced however that causes the authentication to
actually get written with the version of the node it is being sent to even if that version is
greater than the version of the current node, which causes rolling upgrades to fail.
Original commit: elastic/x-pack-elasticsearch@a718ff8a52
* Add sort parameter for get buckets
* Add secondary sort by time
* Use default values from actions in rest requests
Original commit: elastic/x-pack-elasticsearch@a530c0bed6
This commit ensures the old 5.x index templates are removed
using the existing plugin hook, instead of the self-written code.
Original commit: elastic/x-pack-elasticsearch@6faf08d98d
This commit adds compatibility checks while opening a job:
- Checks that jobs without versions (< 5.5) are not opened
- Checks that jobs with incompatible types are not opened
Original commit: elastic/x-pack-elasticsearch@a3adab733e
* Hide ML actions for tribe node client
* Remove unused parameters
* Enable ML actions and rest endpoints for the transport client
* Create the ML components for the transport client
* Add ml transport client tests
Original commit: elastic/x-pack-elasticsearch@509007ca29
* Remove repeated calls to validateAndReturnJobTask
* Wait for closing job
* Refactor resolving job ids
* More close job unit tests
* Don’t finalise closing jobs twice
Original commit: elastic/x-pack-elasticsearch@20616d6f0a
The LocalExporterTests.testLocalExporterFlush() test was removed in elastic/x-pack-elasticsearch#835 when
LocalExporterTests was changed. This test verified that export exceptions are
thrown when monitoring documents are exported using the Monitoring Bulk API but
the underlying monitoring indices are closed.
This commit reintroduces this test, but as a REST test this time.
relates elastic/x-pack-elasticsearch#416
Original commit: elastic/x-pack-elasticsearch@0a42f9a1be
This is the xpack side of elastic/elasticsearch#24447. The one caveat is
there are a number of places within the xpack api that use
BytesReference to pass down to templates. These are converted and encoded
into a BytesArray when necessary, so as not to require changing all of
those APIs here at once, but they should all be converted to String as
well.
Original commit: elastic/x-pack-elasticsearch@8399b9d8c3
For user tokens, we were storing the expiration time as a date with the date time format in the
mappings. Occasionally, we would get CI failures due to parsing the date and having an invalid
format. This change simplifies the user tokens to simply use an Instant and convert it to the epoch
millis when we index the document. This eliminates the need for a date formatter and should ensure
we no longer have issues with parsing the dates.
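A tiny illustration of the change, with illustrative names rather than the actual token service code: the expiration is held as an Instant and indexed as epoch millis, so no date formatter is involved.

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class TokenExpirySketch {
    public static void main(String[] args) {
        // Illustrative lifetime; the real default is discussed in the token commit further below.
        Instant expiration = Instant.now().plus(20, ChronoUnit.MINUTES);

        // Index the expiration as epoch millis -- no date formatter, no parsing ambiguity.
        long expirationEpochMillis = expiration.toEpochMilli();
        System.out.println(expirationEpochMillis);

        // Reading the document back converts the long straight into an Instant.
        Instant restored = Instant.ofEpochMilli(expirationEpochMillis);
        System.out.println(restored);
    }
}
```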
Additionally, the TokenAuthIntegTests#testExpireMultipleTimes could fail if the token was expired
but the approximation for the expiration time was after the current time. In order to resolve this
we get the exact expiration time from the token and use that in the test.
relates elastic/x-pack-elasticsearch#1255
Original commit: elastic/x-pack-elasticsearch@d4bf53e7bc
When we update a model snapshot we need to write it back to
the exact index where we read it from. This is necessary
for rollover as otherwise we would end up with two different
versions of the same snapshot.
Relates elastic/x-pack-elasticsearch#827
Original commit: elastic/x-pack-elasticsearch@b5d1ab38a7
Support the resolution of remote index names, including those that contain wildcards in the cluster name or index part.
Specifically these work:
- `GET /remote*:foo/_search`
- `GET /*:foo/_search`
- `GET /*:foo,*/_search`
- `GET /remote:*/_search`
- `GET /*:*/_search`
This change assumes that every user is allowed to attempt a cross-cluster search against any remote index, and the actual authorisation of indices happens on the remote nodes. Thus `GET /*:foo/_search` will expand to search the `foo` index on every registered remote without consideration of the roles and privileges that the user has on the source cluster.
Original commit: elastic/x-pack-elasticsearch@b45041aaa3
As this does not require any reindexing this is easy to fix by just
changing the watch history template.
In addition the old templates are deleted on start up and the new ones
are instantiated.
Original commit: elastic/x-pack-elasticsearch@7e1ad495ad
I broke this by adding more randomization to core's randomTimeValue
method. Now it produces valid time values that don't round trip
properly over the XContent API. This change causes the test to skip
trying to round trip these sorts of time values.
Original commit: elastic/x-pack-elasticsearch@dcdd588bdb
This is in preparation of introducing a write alias.
It adjusts all requests to persist results to do so
using a method that returns the write alias (even though
it currently returns the same as the read alias).
Relates elastic/x-pack-elasticsearch#827
Original commit: elastic/x-pack-elasticsearch@1358dd8dcf
This commit updates the RunAsIntegTests to randomly assign the superuser role to the user that
authenticates with the cluster while the request is run as a different user. This provides
additional validation that the authorization errors are actually coming from the user the request
is running as and not due to the authenticating user's privileges.
Original commit: elastic/x-pack-elasticsearch@c6360d13e6
Each job introduces new fields to the results index matching
the analysis terms. When the job is created, mappings for those
are added explicitly. However, when rollover is introduced, that
will not be the case. This commit prepares for that by adding
dynamic mapping of new fields as keyword.
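A hedged sketch of what such a dynamic mapping can look like; the template name and scope here are illustrative, not the actual ML results mapping.

```java
public class KeywordDynamicTemplateSketch {
    public static void main(String[] args) {
        // Illustrative only: a dynamic template that maps any newly seen string field
        // to a keyword field, so per-job analysis terms need no explicit mapping once
        // rollover starts creating fresh indices.
        String mapping =
            "{\n" +
            "  \"dynamic_templates\": [\n" +
            "    {\n" +
            "      \"strings_as_keywords\": {\n" +
            "        \"match_mapping_type\": \"string\",\n" +
            "        \"mapping\": { \"type\": \"keyword\" }\n" +
            "      }\n" +
            "    }\n" +
            "  ]\n" +
            "}";
        System.out.println(mapping);
    }
}
```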
Relates elastic/x-pack-elasticsearch#827
Original commit: elastic/x-pack-elasticsearch@8f6cd09a71
This was the behaviour in Shield 2.x, but it was accidentally changed during migration to X-Pack 5.x
Original commit: elastic/x-pack-elasticsearch@de0bf5e688
It just wastes 20 seconds while we time out trying to open named pipes that cannot
possibly have been created.
Original commit: elastic/x-pack-elasticsearch@4e447874f6
Fixes build errors. Still ensures that the timestamp is set to 'now' if the parsed logfile is missing it.
Original commit: elastic/x-pack-elasticsearch@cf60e8d76b
If invalid job configs are transported to the master node then the root
cause of the validation exception gets reported as a remote_transport_exception,
which is extremely confusing.
This commit moves the validation of job configurations to the first node that
handles the action.
Fixes elastic/x-pack-kibana#1172
Original commit: elastic/x-pack-elasticsearch@5ed59d2a6f
Before searching for documents in monitoring indices, we need to ensure
that they exist and are available.
Original commit: elastic/x-pack-elasticsearch@29db55a1fe
This commit adds a base rest handler for security that handles the license checking in the security
apis. This was previously done in some rest handlers but not all, and it had issues where a
value would be returned even though we may not have consumed all of the request parameters, which could
lead to a different response being returned than what we would have expected.
relates elastic/x-pack-elasticsearch#1236
Original commit: elastic/x-pack-elasticsearch@2f1100b64a
Adds defense against broken scrolls when fetching roles and users.
While Elasticsearch *shouldn't* send broken scrolls it has done so
in the past and when it does this causes security to consume the
entire heap and crash. This changes it so we instead fail the request
with a message about the scroll being broken.
Relates to elastic/x-pack-elasticsearch#1299
Original commit: elastic/x-pack-elasticsearch@dfef87e757
Some JDKs do not support the ECDSA cipher suites that we use in the EllipticCurveSSLTests, which
is the underlying cause of some CI failures. This change ensures there is at least one enabled
ECDSA cipher before testing that a connection can be made.
relates elastic/x-pack-elasticsearch#1278
Original commit: elastic/x-pack-elasticsearch@f6c93d776c
* [ML] Not an error to close a job twice
* Error if job is opening
* Address review comments
* Test closed job isn’t resolved
Original commit: elastic/x-pack-elasticsearch@7da7b24c08
Originally we used to try to get the native code version even when ML was disabled.
However, this has proved to be an annoyance in cases where people running on
unsupported platforms have disabled ML. This commit means we'll only try to get
the native code version if ML is enabled on a node.
Original commit: elastic/x-pack-elasticsearch@778d6708d2
This removes the "node" type from `.monitoring-data-2`. This data is sent to _both_ the time-based and non-time-based indexes for Elasticsearch, but the UI only used the time-based variant already.
This is another step in the process of removing the `.monitoring-data-2` index. There is now only one `_type` left in that index: `cluster_info`, which is used by the UI and phone home stats because it contains the license details _and_ the `stack_stats` (e.g., `xpack_usage`).
Original commit: elastic/x-pack-elasticsearch@2cadb5939d
This is a task towards allowing rollover.
Multiple model_size_stats are stored in order to allow
analytics of the memory usage over time. The job _stats
need to display the latest model_size_stats. Before this
commit, the latest model_size_stats was being stored with
a special ID and it was retrieved using that ID. This
does not lend itself well to rollover as we would end up
with multiple of those special IDs in the rolled indices.
This commit removes the need to store a special model_size_stats
version. Job _stats now retrieve the model_size_stats by searching
for the latest one. It also uses a manual ID for all model_size_stats
in order to maintain a single document per log_time.
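A hedged illustration of the manual ID idea; the exact format the real code uses may differ, but the point is that one (job, log_time) pair always maps to one document ID.

```java
import java.time.Instant;

public class ModelSizeStatsIdSketch {
    // Illustrative ID scheme: one document per job per log_time.
    static String documentId(String jobId, Instant logTime) {
        return jobId + "_model_size_stats_" + logTime.toEpochMilli();
    }

    public static void main(String[] args) {
        Instant logTime = Instant.parse("2017-05-01T12:00:00Z");
        // Writing the same (jobId, logTime) twice yields the same ID, so the
        // second write overwrites the first instead of creating a duplicate.
        System.out.println(documentId("my-job", logTime));
    }
}
```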
Relates elastic/x-pack-elasticsearch#827
Original commit: elastic/x-pack-elasticsearch@b2796e9b08
Now that the Monitoring UI no longer checks the `.monitoring-data-2` index
for Kibana, Logstash, or Beats data, we can stop accepting the duplicated
data in that index (the _exact_ same documents are also indexed into the
time-based index for each product).
This ignores rather than rejects requests that contain such documents to
allow older clients to communicate with a 5.5+ monitoring cluster.
Original commit: elastic/x-pack-elasticsearch@def472cf2e
- role_mapping.native is always present, but contains no entries if the security index is unavailable
- file based role mapping does not allow duplicate keys
Original commit: elastic/x-pack-elasticsearch@734cf7e2c0
As fields starting with underscores will be disallowed in master, and we have to
prepare for the upgrade, this commit renames the _status field to status.
When the upgrade logic is in place in the 5.x branch we can remove all the
old style _status handling from the master branch.
Note: All the BWC compatibility tests, that load 5.x indices are now
faking a finished upgrade by adding the `status` field to the mapping
of the watches index.
Original commit: elastic/x-pack-elasticsearch@9d5cc9aaec
In some cases (based on the randomisation) the primary-realm would have no mappings, but the secondary-realm would.
If this occurred we wouldn't write out any mappings and the secondary-realm would not authorize the access being tested
Original commit: elastic/x-pack-elasticsearch@8e81ec9dd7
This introduces a role-mapping API to X-Pack security.
Features:
- A `GET`/`PUT`/`DELETE` API at `/_xpack/security/role_mapping/`
- Role-mappings are stored in the `.security` index
- A custom expression language (in JSON) for expressing the mapping rules
- Supported in LDAP/AD and PKI realms
- LDAP realm also supports loading arbitrary meta-data (which can be used in the mapping rules)
- A CompositeRoleMapper unifies roles from the existing file based mapper, and the new API based mapper.
- Usage stats for native role mappings
Original commit: elastic/x-pack-elasticsearch@d9972ed1da
This is related to elastic/elasticsearch#24412. That commit changed how
ListenableActionFuture implementations are created. This commit
updates x-pack to be compatible with those changes. In particular, all
the usages of ListenableActionFuture in x-pack could be replaced with
PlainActionFuture as the "listening" functionality was not being used.
Original commit: elastic/x-pack-elasticsearch@7c8d8e3df9
JobDataDeleter handles the deletion logic for 3 cases:
1. deleting a model snapshot and its state docs
2. deleting all results after a timestamp
3. deleting all interim results
The last 2 are currently implemented by manually performing
a search and scroll and then adding matching hits in a bulk
delete action. This operation is exactly what delete-by-query
does.
This commit changes JobDataDeleter to use delete-by-query. This
makes the code simpler and less error-prone. The downside is
losing some logging which seems non-critical. Unit tests for
JobDataDeleter are also removed as they are heavily mocked tests,
adding little value and high maintenance cost. This functionality
is tested by integration tests already.
relates elastic/x-pack-elasticsearch#821
Original commit: elastic/x-pack-elasticsearch@7da91332bd
The distribution of watches now happens on the node which holds the
watches index, instead of on the master node. This requires several
changes to the current implementation.
1. Running on shards and replicas
In order to run watches on the nodes that hold the watches index on their
primaries and replicas, and to ensure that watches do not run twice, there is
logic which checks the local shards, runs a murmurhash on the watch id and
takes the modulo against the number of shards and replicas; this is how to
find out if a watch should run locally (a rough sketch follows this list). Reloading happens
2. Several master node actions moved to a HandledTransportAction, as they
are basically just aliases for indexing actions, among them the
put/delete/get watch actions, the acknowledgement action, the de/activate
actions
3. Stats action moved to a broadcast node action, because we potentially
have to query every node to get watcher statistics
4. Starting/Stopping watcher now is a master node action, which updates
the cluster state and then listeners act on it. Because of this watches
can be running on two nodes, if those have different cluster state
versions, until the new watcher state is propagated
5. Watcher is started on all nodes now. With the exception of the ticker
schedule engine most classes do not need a lot of resources while running.
However they have to run, because of the execute watch API, which can hit
any node - it does not make sense to find the right shard for this watch
and only then execute (as this also has to work with a watch that has not
been stored before)
6. By using an indexing operation listener, each storing of a watch now
parses the watch first and only stores it on successful parsing
7. Execute watch API now uses the watcher threadpool for execution
8. Getting the number of watches for the stats now simply queries the
different execution engines, how many watches are scheduled, so this is
not doing a search anymore
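A rough sketch of the local-execution decision from point 1, with a plain hashCode() standing in for the murmur hash used by Watcher; names and the hash function are illustrative.

```java
public class WatchDistributionSketch {
    /**
     * Decide whether this node should execute the given watch.
     * Every shard copy (primary or replica) of the watches index gets a slot;
     * the watch runs only on the copy whose slot matches the hash of the watch id.
     */
    static boolean shouldRunLocally(String watchId, int localShardCopySlot, int totalShardCopies) {
        int hash = watchId.hashCode();                       // stand-in for the murmur hash
        int owner = Math.floorMod(hash, totalShardCopies);   // floorMod keeps the result non-negative
        return owner == localShardCopySlot;
    }

    public static void main(String[] args) {
        // With 2 shard copies, each watch id maps to exactly one of the two slots.
        System.out.println(shouldRunLocally("my_watch", 0, 2));
        System.out.println(shouldRunLocally("my_watch", 1, 2));
    }
}
```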
There will be follow up commits on this one, mainly to ensure BWC compatibility.
Original commit: elastic/x-pack-elasticsearch@0adb46e658
Within the same JVM, setting the number of processors available to Netty
can only be done once. However, tests randomize the number of processors
and so without intervention would attempt to set this value multiple
times. Therefore, we need to use a flag that prevents setting this value
in tests.
Relates elastic/x-pack-elasticsearch#1266
Original commit: elastic/x-pack-elasticsearch@d127149725
Many of the tests assume that the trial license has already been generated before the test gets to run. As this is asynchronously triggered upon node
startup, however, there is no guarantee that trial license generation has completed before the tests get to execute, leading to null values when
checking clusterService.state().metaData().custom(LicensesMetaData.TYPE).
Original commit: elastic/x-pack-elasticsearch@d909c9ba95
LicenseManagerServiceTests sometimes fails in jenkins, but fairly
rarely. We don't get useful logs when it does. This cranks up
the log level and adds some more assertions so we can better track
down where the failure comes from.
Relates to elastic/x-pack-elasticsearch#222
Original commit: elastic/x-pack-elasticsearch@3e08725fc7
When a user creates a datafeed, as well as checking they have permission
to create a datafeed we also check that they have permission to search the
indices they've configured the datafeed to search.
Previously this second check was erroneously done for the user who issued
the put_datafeed request, whereas it should be done as the runas user for
that request.
Original commit: elastic/x-pack-elasticsearch@4c35204c66
When we revert to snapshot, if we delete intervening results
we should delete with an open end on the time range for the
case when future data has been posted to the job.
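A minimal sketch of the open-ended time range using the standard QueryBuilders API; the field name and value are illustrative and the snippet assumes the Elasticsearch dependency is on the classpath.

```java
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

public class RevertDeleteRangeSketch {
    public static void main(String[] args) {
        long revertEpochMillis = 1493640000000L; // illustrative snapshot timestamp

        // Open-ended upper bound: everything at or after the snapshot time matches,
        // including results produced from data posted "in the future" relative to it.
        QueryBuilder intervening = QueryBuilders.rangeQuery("timestamp").gte(revertEpochMillis);

        System.out.println(intervening);
    }
}
```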
Original commit: elastic/x-pack-elasticsearch@c3f5e8f19e
The one argument ctor for `Script` creates a script with the
default language but most usages of it are for testing and either
don't care about the language or are for use with
`MockScriptEngine`. This replaces most usages of the one argument
ctor on `Script` with calls to `ESTestCase#mockScript` to make
it clear that the tests don't need the default scripting language.
Original commit: elastic/x-pack-elasticsearch@c1d05b7357
When the active directory realm was refactored to add support for authenticating against multiple
domains, only the default authenticator respected the user_search.filter setting. This commit moves
this down to the base authenticator and also changes the UPN filter to not include sAMAccountName
in the filter.
Original commit: elastic/x-pack-elasticsearch@d2c19c9bee
- Removes need to handle exception from action methods
- Clearly renames DatafeedJobIT to DatafeedJobsRestIT to distinguish
from DatafeedJobsIT
- Refactors DatafeedJobsIT to reuse MlNativeAutodetectIntegTestCase
Original commit: elastic/x-pack-elasticsearch@5bd0c01391
Cross cluster search uses ClusterSearchShardsAction under the covers.
Without this change, you would need both "read_cross_cluster" and "view_index_metadata" privileges in order to have permission to execute searches from a remote cluster.
Original commit: elastic/x-pack-elasticsearch@65a6aff329
If a single permission set does not have a query defined then this should be considered as the user
not having document level security for the indices matching that pattern. However, the lack of
document level security was not being taken into account and document level security was being
applied when it should not have been.
Original commit: elastic/x-pack-elasticsearch@f5777c2f37
Many tests in monitoring use the pattern of calling first awaitMonitoringDocsCount, and then doing a search that checks certain properties, assuming
that the doc count is correct at that point. In the presence of replicas, awaitMonitoringDocsCount might not wait for all shard copies to have the
desired property. A subsequent search might then hit a shard where the property does not hold. As these tests randomize the number of replicas
(through the random_index_template), it is easier to constrain awaitMonitoringDocsCount just to the primary and then do subsequent checks just by
querying the primary.
Original commit: elastic/x-pack-elasticsearch@4165beb903
This commit creates a single server socket that will be connected to by local sockets. The local
sockets will use the port of the previously stopped ldap server as the local port. This will
prevent the ldap library from establishing a connection. The previous use of server sockets for
this did not work on all operating systems as the backlog parameter has platform specific meaning.
Original commit: elastic/x-pack-elasticsearch@03b6bf39d4
This commit adds a timeout for the expiration of invalidated tokens so that we can expect that the
requests will have been finished before we do the assertions on the internal test cluster.
Original commit: elastic/x-pack-elasticsearch@2928706224
This commit adds a token based access mechanism that is a subset of the OAuth 2.0 protocol. The
token mechanism takes the same values as a OAuth 2 standard (defined in RFC 6749 and RFC 6750),
but differs in that we use XContent for the body instead of form encoded values. Additionally, this
PR provides a mechanism for expiration of a token; this can be used to implement logout
functionality that prevents the token from being used again.
The actual tokens are encrypted using AES-GCM, which also provides authentication. The key for
encryption is derived from a salt value and a passphrase that is stored on each node in the
secure settings store. By default, the tokens have an expiration time of 20 minutes, which is
configurable up to a maximum of one hour.
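For illustration, a self-contained JDK-only sketch of an AES-GCM round trip with a PBKDF2-derived key; the key size, iteration count and IV handling here are assumptions, not necessarily what the token service uses.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public class TokenCryptoSketch {
    public static void main(String[] args) throws Exception {
        char[] passphrase = "node-local-secret".toCharArray();   // illustrative passphrase
        byte[] salt = new byte[16];
        byte[] iv = new byte[12];                                // 96-bit IV, typical for GCM
        new SecureRandom().nextBytes(salt);
        new SecureRandom().nextBytes(iv);

        // Derive an AES key from the passphrase and salt (illustrative PBKDF2 parameters).
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA512");
        byte[] keyBytes = factory.generateSecret(new PBEKeySpec(passphrase, salt, 10000, 256)).getEncoded();
        SecretKeySpec key = new SecretKeySpec(keyBytes, "AES");

        // AES-GCM gives both confidentiality and authentication of the token payload.
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] encrypted = cipher.doFinal("token-payload".getBytes(StandardCharsets.UTF_8));

        cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        System.out.println(new String(cipher.doFinal(encrypted), StandardCharsets.UTF_8));
    }
}
```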
Relates elastic/x-pack-elasticsearch#8
Original commit: elastic/x-pack-elasticsearch@3d201ac2bf
This fixes an issue where a bucket can have both interim results and
non-interim results; a bucket should never have both at the same time.
The steps to cause this situation are:
1. Flush a running job and create interim results
2. Close that job (this does not delete interim results)
3. Re-open the job and POST data
4. The job will eventually emit a bucket result which mingles with the
existing interim results
Normally interim results are deleted by AutoDetectResultProcessor when a
bucket is parsed following a flush command. Because of the close and
re-opening of the job, AutoDetectResultProcessor no longer knows that a
previous flush command created interim results.
The fix is to always delete interim results the first time
AutoDetectResultProcessor sees a bucket.
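The fix boils down to a one-shot flag in the result processor; a hedged sketch with illustrative class and method names.

```java
public class InterimResultSketch {
    private boolean deletedInterimResults = false;

    /** Called for every bucket result parsed from the autodetect output. */
    void onBucket(String jobId) {
        if (!deletedInterimResults) {
            // First bucket seen since the process (re)started: clear any interim
            // results that may have been left behind by a flush before a close.
            deleteInterimResults(jobId);
            deletedInterimResults = true;
        }
        // ... persist the bucket as usual ...
    }

    void deleteInterimResults(String jobId) {
        System.out.println("deleting interim results for " + jobId); // stand-in for the real delete
    }

    public static void main(String[] args) {
        InterimResultSketch processor = new InterimResultSketch();
        processor.onBucket("my-job"); // deletes interim results once
        processor.onBucket("my-job"); // subsequent buckets skip the delete
    }
}
```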
relates elastic/x-pack-elasticsearch#1188
Original commit: elastic/x-pack-elasticsearch@5326455f54
In the SessionFactoryLoadBalancingTests, we sometimes want a connection to a certain IP and Port to
fail as a way to mock an unresponsive/disconnected LDAP server. The test does this by starting up
multiple LDAP servers and then shutting some down. When the server is shut down the port that it
was bound to is open for another process or test to bind to, which can lead to sporadic failures in
CI. This change is a best effort attempt to prevent this by binding a server socket to the port and
filling its backlog so other connections should fail.
Relates elastic/x-pack-elasticsearch#1195
Original commit: elastic/x-pack-elasticsearch@b31a560c93
The DatafeedJobsIT.testRealtime test fails from time to time.
The test seems to take a long time to execute the flush action
after the lookback. This could make sense as the test produces
a few records over the span of a week with 5 minutes bucket_span.
Thus, flush will end up doing a lot of work to create results
for so many buckets.
This change increases the bucket_span to 1 hour. Hopefully, this
will stop the failures.
Relates elastic/x-pack-elasticsearch#1162
Original commit: elastic/x-pack-elasticsearch@4366907371
This commit fixes the support for elliptic curve certificates that are specified as a PEM file.
These certificates and private keys can now be read properly and an integration test was added to
ensure that TLS also functions correctly with these certificates.
Original commit: elastic/x-pack-elasticsearch@6d6d579c88
This commit reduces spamming of the logs when a common SSL exception is encountered such as a
client not trusting the server's certificate or a plaintext request sent to a channel that expects
TLS traffic.
relates elastic/x-pack-elasticsearch#1062
Original commit: elastic/x-pack-elasticsearch@94959e79f6
This change prevents the situation where cleanup of ML indices immediately
after deleting a job leaves the audit notification in limbo because the index
it was due to be indexed into has been deleted.
Relates elastic/x-pack-elasticsearch#1142
Original commit: elastic/x-pack-elasticsearch@300e9c36ce
Ordinary Kibana users should not have access to the cluster state of ES,
and therefore they should not be able to access ML jobs without explicit
permission.
Original commit: elastic/x-pack-elasticsearch@77273d561a
When a condition is unmet, the ack status of the actions needs to be
reset again, so that new alerts can be triggered.
Due to a bugfix this functionality was removed from ES 5.0.0-alpha5
onwards.
relates elastic/x-pack-elasticsearch#1123
Original commit: elastic/x-pack-elasticsearch@83db2cecf9
Persistent tasks should verify that completion notification is done for correct version of the task, otherwise a delayed notification from an old node can accidentally close a newly reassigned task.
Original commit: elastic/x-pack-elasticsearch@478bb6e730
* Adds a check to wait for active tasks for XPackRestIT
* uses test logger
* Change to use assertBusy instead of awaitBusy
* fixes failures with active tasks remaining
* Moves wait for pending tasks into MlRestTestStateCleaner
* remove unnecessary log line
Original commit: elastic/x-pack-elasticsearch@1f098dbb64
By creating the watches via the exporter, we get to afford ourselves
with a much more automatic and simpler set of security permissions.
This does limit us in a few ways (e.g., every exporter has to deal with
cluster alerts itself, which means that newer releases of Kibana cannot
help by adding newer cluster alerts for older, still-monitored
clusters).
Original commit: elastic/x-pack-elasticsearch@448ef313c3
When Logstash 5.2 - 5.3 submit documents via the `_xpack/monitoring/_bulk`
endpoint, it sends its time-based documents with an explicit `_id` of
`""`.
This used to be automatically ignored by Monitoring, but we now accept the
_id that we are given (including `null`). ES, prior to 5.3.1, accepted
`""` as a valid `_id` through the `_bulk` endpoint, which means that it
blindly accepted and overwrote documents given that ID, meaning that all
Logstash instances "shared" the exact same document and therefore the UI
became useless.
This change allows `""` to be used and it simply replaces that value, and
only that value, with `null`. This enables backwards compatibility with LS
5.2 - 5.3.0.
Original commit: elastic/x-pack-elasticsearch@889578e61e
PersistentTasksCustomMetadata was using a generic param named `Params`. This conflicted with the imported interface `ToXContent.Params`. The java compiler was preferring the generic param over the interface so everything was fine, but Eclipse apparently prefers the interface in this case, which was screwing up the hierarchy and causing compile errors in Eclipse. This change fixes it by renaming the generic param to `P`.
Original commit: elastic/x-pack-elasticsearch@8528870684
- Mark all security indices (that is all indices managed by SecurityLifecycleService) as "superuser only" (only superuser role can have direct permissions)
- Add unit tests for IndexLifecycleManager
Original commit: elastic/x-pack-elasticsearch@e4478825e0
This commit removes the SecuredString class that was previously used throughout the security code
and replaces it with the SecureString class from core that was added as part of the new secure
settings infrastructure.
relates elastic/x-pack-elasticsearch#421
Original commit: elastic/x-pack-elasticsearch@e9cd117ca1
When an index name pattern contains both date math and wildcards, the name resolution does not
return the expected result. This change moves the date math resolution to before our attempts to
match wildcards so that both can be used in the same pattern.
relates elastic/x-pack-elasticsearch#1065
Original commit: elastic/x-pack-elasticsearch@9f48b42fad
Let the close job and stop datafeed apis redirect to the elected master node.
This is for cluster state observation purposes, so that a subsequent open and then close job or
start and then stop datafeed see the same local cluster state and sanity validation doesn't fail.
Original commit: elastic/x-pack-elasticsearch@21a63184b9
introduced separate task names to register the persistent tasks executors and params.
Also renamed start and stop datafeed action names to be singular in order to be consistent with open and close action names.
Original commit: elastic/x-pack-elasticsearch@21f7b242cf
Support for default settings has been removed in core and so some
methods were refactored. This commit responds to this change in core.
Original commit: elastic/x-pack-elasticsearch@b22c612de4
The path has changed so it’s no longer possible to distinguish between data feed and job tasks.
The preceding test get_datafeed provides ample coverage anyway.
Original commit: elastic/x-pack-elasticsearch@780b1beb6b
When the execute watch API is called without recording the execution
in the watch history, the watch status is not updated, in order to not
let the in-memory watch status diverge from the one persisted on disk.
In order to work around this issue, the execute watch API can simply
clone a new watch status and a new watch, which means the object in
the watch store is never updated. This allows for execution and changing
of the watch status, before it is returned to the client.
relates elastic/x-pack-elasticsearch#889
Original commit: elastic/x-pack-elasticsearch@6a0d9c9a78
Changes persistent task serialization and forces params and status to have the same writeable name as the task itself.
Original commit: elastic/x-pack-elasticsearch@59cf3dca39
remove `node.attr.max_running_jobs` node attribute and use `node.attr.ml.enabled` node attribute instead to know whether a node is an ml node or not.
Also renamed `max_running_jobs` setting to `xpack.ml.max_running_jobs`.
Original commit: elastic/x-pack-elasticsearch@798732886b
Removes the last pieces of ActionRequest from PersistentTaskRequest and renames it into PersistTaskParams, which is now just an interface that extends NamedWriteable and ToXContent.
Original commit: elastic/x-pack-elasticsearch@5a298b924f
Following this change, if the user runs on a platform that we don't ship
ML binaries for:
* If ML is enabled the node still refuses to start, but clearly says why
* If ML is disabled the node starts up without logging any errors
Original commit: elastic/x-pack-elasticsearch@af4fb8c411
Now that task ids are strings instead of longs (elastic/x-pack-elasticsearch#1035), ml can use the job and datafeed ids as task ids.
This removes logic that would otherwise iterate over all tasks and check if the task's request id was equal to the provided id and instead just do lookup in the task map.
Job and datafeed task ids are prefixed with either 'job-' or 'datafeed-', because job and datafeed ids don't have to be unique as they are stored separately from each other.
Original commit: elastic/x-pack-elasticsearch@b48c2b368a
This built-in watcher_admin role is able to execute all watcher actions,
read the watch history indices and read the watches index.
The watcher_user role allows getting a watch and getting the stats, and that's it.
relates elastic/x-pack-elasticsearch#978
Original commit: elastic/x-pack-elasticsearch@11b33a413b
- stops the datafeed when post/flush throw a conflict exception.
A conflict exception signifies the job state is not opened, thus
we are better off stopping the datafeed.
- handles flushing the job the same way as posting to the job.
relates elastic/x-pack-elasticsearch#855
Original commit: elastic/x-pack-elasticsearch@49a54912c2
Makes the log more readable in editors not set to UTF-8.
Customers may well be in this situation on Linux/Windows.
Original commit: elastic/x-pack-elasticsearch@4e59fc90cf
The commit changes how LocalExporterTests stops: it now uses the
node_stats document collected on each node and checks if it's older
than a given number of seconds (10). It also removes log traces.
Original commit: elastic/x-pack-elasticsearch@0384690b41
Before this change the persistent task operations related to opening
and closing jobs would time out a long time before the operations
related to native processes.
Original commit: elastic/x-pack-elasticsearch@23076b773b
Changes the logging of LDAP authentication failures from "always" to "only if the user failed to be authenticated"
Previously there were cases (such as having 2 AD realms) where successful user authentication would still cause an INFO message to be written to the log for every request.
Now that message is suppressed, but a WARN message is added _if-and-only-if_ the user cannot be authenticated by any realm.
This is implemented via a new value stored in the ThreadContext that the AuthenticationService chooses to log (or not log) depending on the result of the authenticate process.
Closes: elastic/x-pack-elasticsearch#887
Original commit: elastic/x-pack-elasticsearch@b81b363729
The PR detects if SMILE is being provided, then correctly slices the stream such that each document is parsed individually. This is required because jackson's SMILE parser is stricter than its JSON parser and will stop parsing when it hits a streamSeparator (unlike JSON, which will eagerly try to find more objects to parse).
Removes the forced-headers from the various REST tests.
relates elastic/x-pack-elasticsearch#642
Original commit: elastic/x-pack-elasticsearch@c0e97cd545
Instead of having a separate listener for indicating that the current task is finished, this commit switches to using the allocated task object itself.
Original commit: elastic/x-pack-elasticsearch@7ad5362121
`PersistentTasksExecutor#getAssignment(...)` should be a cheap and side-effect free method,
but in case of `OpenJobPersistentTasksExecutor` and `StartDatafeedPersistentTasksExecutor` before this change it would index a document each time `getAssignment(...)` was invoked
Original commit: elastic/x-pack-elasticsearch@5ca5890baf
The change applies chunking by default on aggregated datafeeds.
The chunking is set to a manual mode with time_span being
1000 histogram buckets.
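In code, the manual chunk span is simply the histogram interval times the bucket budget; an illustrative computation (the names are not the actual datafeed config fields).

```java
public class ChunkSpanSketch {
    public static void main(String[] args) {
        long histogramIntervalMs = 5 * 60 * 1000L;   // e.g. a 5 minute histogram interval
        long bucketsPerChunk = 1000L;                // at most 1000 histogram buckets per search

        long chunkSpanMs = bucketsPerChunk * histogramIntervalMs;
        // Each chunked search covers this much time, so a response never holds
        // more than ~1000 buckets regardless of how long the lookback period is.
        System.out.println("chunk span ms: " + chunkSpanMs);
    }
}
```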
The motivation for the change is two-fold:
1. It helps to avoid memory pressure/blowing.
Users may perform a lookback on a very long period of time. In that
case, we may hold a search response for all that time which could
include too many buckets. By chunking, we avoid that situation
as we know we'll only keep results for 1000 buckets at a time.
2. It makes cancellation more responsive.
In elastic/x-pack-elasticsearch#862 we made the processing of a search response cancellable in a
responsive manner. However, the search phase cannot be cancelled at
the moment. Chunking makes the search phase shorter, which will
result to a better user experience when they stop an aggregated
datafeed.
Also note the change sets the default chunking_config on datafeed
creation so the setting is no longer hidden.
Relates to elastic/x-pack-elasticsearch#803
Original commit: elastic/x-pack-elasticsearch@ae8f120f5f
When a datafeed task is created but it cannot be assigned, the task
has a null status. This means _stats reports it as stopped, however
deleting it fails. In addition, it's a better experience to fail
the start datafeed request altogether and give the user the chance
to fix their data indices.
This change fails a datafeed-start if it cannot be assigned.
relates elastic/x-pack-elasticsearch#1018
Original commit: elastic/x-pack-elasticsearch@532288fda0
Retries should be already handled by TransportMasterNodeAction, there is no need to introduce another retry layer in Persistent Tasks code.
Original commit: elastic/x-pack-elasticsearch@967ac7f7fa
This commit changes how LocalExporterTests stops the monitoring
components: it first stops the monitoring service (but keeps the
local exporter enabled), deletes and checks if monitoring indices
are recreated, and then disables the local exporter.
Original commit: elastic/x-pack-elasticsearch@4c4809a660
Closing a job may take a while. In the meantime it is possible to start a datafeed, because before this change the job state remained OPENED.
With this change when the executor node receives the close job request, it will first set the status to CLOSING and after that closes the job (closing autodetect process, etc.).
relates elastic/x-pack-elasticsearch#990
Original commit: elastic/x-pack-elasticsearch@d8d89c0756
This commit removes the smoke-test-monitoring-with-security project
and replaces it with a REST test.
Original commit: elastic/x-pack-elasticsearch@f1665815c2
The execution has diverged too much from the post data, flush and update process apis since support for closing all jobs was added.
The logic is now easier to understand as it exists in a single source file instead of in both CloseJobAction and TransportJobTaskAction.
Original commit: elastic/x-pack-elasticsearch@daf5fabad5
Users currently have difficulty diagnosing authentication failures.
Some logging messages mislead them, and in other cases there are unexpected behaviours that are not logged at all.
This adds additional DEBUG log messages and changes some existing messages in an attempt to alleviate that problem.
Original commit: elastic/x-pack-elasticsearch@c6ea98b038
Increase the timeout to give enough time for a datafeed to
stop smoothly.
This is the second step to avoid hitting the default timeout.
The first was ensuring aggregated datafeed is cancellable in
a responsive manner. The third and final step will be to
apply chunking in aggregated datafeeds in order to shorten
the duration of the search, which will make cancellation even
more responsive.
Relates elastic/x-pack-elasticsearch#803
Original commit: elastic/x-pack-elasticsearch@db642330ec
This commit restores the ability to build x-pack-elasticsearch without issues when running without
access to the internet. When the `--offline` flag is used, we will not try to contact vault and the
aws apis to retrieve the ml-cpp binaries but instead gradle will use a cached version even though
it may be expired.
relates elastic/x-pack-elasticsearch#726
Original commit: elastic/x-pack-elasticsearch@b0915d8fa9
This is analogous to the bwc-zip for elasticsearch. The one caveat is
due to the structure of how ES+xpack must be checked out, we end up with
a third clone of elasticsearch (the second being in :distribution:bwc-zip).
But the rolling upgrade integ test passes with this change.
relates elastic/x-pack-elasticsearch#870
Original commit: elastic/x-pack-elasticsearch@34bdce6e99
This commit renames and moves the forked delete by query classes from being ml specific to being a
xpack common class since an upcoming security feature plans to make use of this. Additionally, this
commit fixes an issue where the dbq action was being executed by the calling user instead of the
xpack user for certain requests. This was found when adding an authorization change that restricts
this action's execution to the xpack user only.
Original commit: elastic/x-pack-elasticsearch@d5967e7255
There was a problem with the way CompositeBytesReference was used in the
StateProcessor. In the case of a large state document we ended up with a
deeply nested CompositeBytesReference that then caused a deep stack and N^2
processing in the bulk action processor.
This change uses an intermediate list of byte arrays that get combined into
a single CompositeBytesReference to avoid the deep nesting.
Additionally, errors in state processing now bubble up to close the state
stream, which will cause the C++ process to stop trying to persist more state.
Finally, the results processor also times out after a similar period (30 minutes)
to that used by the state processor.
Original commit: elastic/x-pack-elasticsearch@ceb31481d1
Rather than using an async call, this leverages
the Assignment logic while selecting nodes.
Now with 300% more tests!
Original commit: elastic/x-pack-elasticsearch@300d628f72
It has been observed that Amazon EBS volumes created from snapshots can
have very high latency the first time a given block is accessed. This
can lead to named pipes taking longer than 2 seconds to create.
Since the native processes create their named pipes immediately after
startup, and this only takes a fraction of a second on a local disk, 2
seconds was considered a generous timeout, but it seems that in the case
of a remote NAS with lazy provisioning it's not long enough. During
debugging a latency of just over 3 seconds was observed. The timeouts
have been increased to 10 seconds.
relates elastic/x-pack-elasticsearch#922
Original commit: elastic/x-pack-elasticsearch@c90434c948
Moves the direct management of the security index from SecurityLifecycleService to IndexLifecycleManager, so that the SecurityLifecycleService can take responsibility for several indices.
Multiple security indices are required as we move away from storing multiple types in a single index.
Original commit: elastic/x-pack-elasticsearch@fde3a42b4d
The IndexAuditTrailMutedTests have a threadpool but fail to set it on the test client, which causes
an NPE and tests to fail.
Original commit: elastic/x-pack-elasticsearch@d34a4ce080
As the snapshot that is loaded is an important operational
aspect of a job, this change adds a notification that displays
the loaded snapshot with its latest_record_timestamp and the
job's latest_record_timestamp. Having both allows us to discover
when a job is recovering after a node failure.
relates elastic/x-pack-elasticsearch#872
Original commit: elastic/x-pack-elasticsearch@c2dee495a2
The test fails on slow machines because of inflight bulk requests
that hit one node while the others are stopping. This commit adds
more time (10s), equivalent to 2 to 3 collection intervals, to delete
the monitoring indices. It also adds TRACE logging level for the test.
Original commit: elastic/x-pack-elasticsearch@b433937946
* [ML] Adds jobType to Job
This change adds a `jobType` field to the `Job` class so that when the job is written to the index a `job_type` field is written in the document. This will help separate this type of job from other new job types in the future, so migrating the index to allow those new types of jobs will be easier.
relates elastic/x-pack-elasticsearch#798
* Addresses review comments
Original commit: elastic/x-pack-elasticsearch@d9fd11edb3
When the LDAP SDK returns a SearchResult that has a non-success ResultCode, convert it to an exception and call onFailure
A configuration setting controls whether failures in referrals should be fatal (defaults to ignoring errors)
Closes: elastic/x-pack-elasticsearch#717
Original commit: elastic/x-pack-elasticsearch@4159758c2a
PersistentTasksService methods are now using ActionListener<PersistentTask<?>> instead of PersistentTaskOperationListener.
Original commit: elastic/x-pack-elasticsearch@f95d8bda3d
The `FieldPermissions` class incorrectly assumed that the `granted` and `denied` arrays were
sorted, so it could do a `binarySearch` to see if `_all` was in the arrays.
Original commit: elastic/x-pack-elasticsearch@49b5875602
This is a follow-on to elastic/x-pack-elasticsearch#939, which removes the use of Arrays.binarySearch in the FieldPermissions
class. This change removes other incorrect uses in the rest of the x-pack code and replaces them
with a stream based implementation.
Original commit: elastic/x-pack-elasticsearch@ccca7e9bad
Before this change, aggregation datafeeds used the histogram bucket
key as the record timestamp that is posted to the job. That meant
that the latest_record_timestamp at the end of a datafeed run was
the start of the latest seen histogram bucket. Upon continuing the
datafeed, the search starts from one millisecond after the
latest_record_timestamp. Hence, data may be fetched for a second time.
This change requires a max aggregation on the time_field nested in
the histogram bucket. It then reads the timestamp from that agg.
This ensures datafeed can restart without duplicating data.
relates elastic/x-pack-elasticsearch#874
Original commit: elastic/x-pack-elasticsearch@f820efa866
- include 'real-time' instead of now as the end time for real-time
datafeeds
- do not notify lookback is completed when datafeed was stopped
- do not notify datafeed switch to real-time when datafeed was stopped
Relates elastic/x-pack-elasticsearch#878
Original commit: elastic/x-pack-elasticsearch@aa22f9b86f
This commit is in response to the renaming of the random ASCII helper
methods in ESTestCase. The name of this method was changed because these
methods only produce random strings generated from [a-zA-Z], not from
all ASCII characters.
Relates elastic/x-pack-elasticsearch#942
Original commit: elastic/x-pack-elasticsearch@a6085964d3
This commit reenables the Monitoring Bulk Api REST tests. The XPackRestIT
now enables/disables the local default exporter before executing the monitoring
tests, and also waits for the monitoring service to be started before executing
the test.
Original commit: elastic/x-pack-elasticsearch@10b696198c
State processing can take a lot longer than log processing, even after
the C++ process has closed its end of the pipe. The pipe has a buffer,
and indexing the state document(s) in that buffer can take more than a
second.
relates elastic/x-pack-elasticsearch#945
Original commit: elastic/x-pack-elasticsearch@65f5075028
* [ML] Set job create time on server
* Job.Builder serialisation tests
* Make setCreateTime package private
Original commit: elastic/x-pack-elasticsearch@d2d75e0d7b
Adds the following validations (a rough sketch of the interval checks follows the list):
- aggregations must contain date_histogram or histogram at the top level
- a date_histogram has to have its time_zone set to UTC (or unset, which
defaults to UTC)
- a date_histogram supports calendar intervals only up to 1 week
to avoid the length variability of longer intervals
- aggregation interval must be greater than zero
- aggregation interval must be less than or equal to the bucket_span
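A rough sketch of the two interval checks from the list above; method names and exception messages are illustrative.

```java
import java.util.concurrent.TimeUnit;

public class DatafeedAggValidationSketch {
    static void validateIntervals(long histogramIntervalMs, long bucketSpanMs) {
        if (histogramIntervalMs <= 0) {
            throw new IllegalArgumentException("Aggregation interval must be greater than 0");
        }
        if (histogramIntervalMs > bucketSpanMs) {
            throw new IllegalArgumentException(
                "Aggregation interval must be less than or equal to the bucket_span");
        }
    }

    public static void main(String[] args) {
        long bucketSpanMs = TimeUnit.HOURS.toMillis(1);
        validateIntervals(TimeUnit.MINUTES.toMillis(5), bucketSpanMs);  // passes
        validateIntervals(TimeUnit.HOURS.toMillis(2), bucketSpanMs);    // throws: interval > bucket_span
    }
}
```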
Original commit: elastic/x-pack-elasticsearch@404496a886
* Remove JobManager's dependency on JobResultsPersister
* Remove unneeded call to refresh the state index
Original commit: elastic/x-pack-elasticsearch@0b2351bba7
If jobs are being deleted then the operations required to get stats
could fail with unexpected exceptions. When stats for multiple jobs
were being requested, this would previously cause the whole operation
to fail.
This commit changes the stats endpoint to ignore jobs that are being
deleted.
Fixes elastic/prelert-legacy#837
Original commit: elastic/x-pack-elasticsearch@6ac141a987
When this happens it means the job has been deleted, which in turn means
the C++ process has been stopped, so there's no need to send it a message
and hence no problem worth logging a stack trace for.
This differs from elastic/x-pack-elasticsearch#896 because elastic/x-pack-elasticsearch#896 was for a similar situation with
closed jobs, whereas this one is for deleted jobs.
Original commit: elastic/x-pack-elasticsearch@9bb4e98fe7
The test is too rigid in checking the right number of node_stats documents that are collected. If a node takes time to start, the node_stats count % numNodes will always be different than 0.
It also adds more logging for LocalBulk failures.
Original commit: elastic/x-pack-elasticsearch@1ebb20b6f6
In order to prevent tasks state updates by stale executors, this commit adds a check for correct allocation id during status update operation.
Original commit: elastic/x-pack-elasticsearch@b94eb0e863
This commit changes the LocalExporterTests so that it now tests
various randomized cases in a single test. This should speed up
the test as well as minimize the failures due to multiple start
/stop of the exporter. It also uses the MonitoringBulk API
instead of calling the Exporter instances, which makes more sense
since it is the normal way to index monitoring documents.
Related elastic/x-pack-elasticsearch#416
Original commit: elastic/x-pack-elasticsearch@f8a4af15cd
Detector configs are validated both by our C++ and by our Java code.
If the C++ is stricter than the Java then error reporting is poor.
This commit adds two extra validation checks to the Java code that
were already present in the C++ validation.
relates elastic/x-pack-elasticsearch#856
Original commit: elastic/x-pack-elasticsearch@bd4ce2377c
It's possible for a C++ process to exit between the time when a
config update message for it is queued and the time that message
is processed. This commit ensures we don't spam the log with a
stack trace in this situation, as it's not a problem at all.
relates elastic/x-pack-elasticsearch#891
Original commit: elastic/x-pack-elasticsearch@81af8eaf70
Aggregated data extraction is done in 2 phases:
1. search
2. process response
The first phase cannot be currently cancelled. However, it usually
is the fastest of the two.
The second phase processes the histogram buckets in the search
response into flat JSON and then posts the result stream to the job.
This phase can be split into batches where a few buckets are posted
to the job at a time. Cancelling can then work between batches.
This commit changes the AggregationDataExtractor to process the
search response in batches. The definition of a batch is crucial
as it has to be short enough to allow for responsive cancelling,
yet long enough to minimise overhead due to multiple calls to the
post data action. The number of key-value pairs written by the
processor is a good candidate for a batch size measure. By testing,
1000 seems to be an effective number.
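A simplified sketch of the batching idea (the threshold and names are illustrative): flush the accumulated records to the post data call whenever the key-value counter crosses the threshold, checking for cancellation between batches.

```java
import java.util.ArrayList;
import java.util.List;

public class AggregationBatchSketch {
    private static final int BATCH_SIZE_KEY_VALUE_PAIRS = 1000;

    private final List<String> pending = new ArrayList<>();
    private int keyValueCount = 0;

    /** Called once per flattened histogram bucket record. */
    void onRecord(String jsonRecord, int keyValuePairsInRecord) {
        pending.add(jsonRecord);
        keyValueCount += keyValuePairsInRecord;
        if (keyValueCount >= BATCH_SIZE_KEY_VALUE_PAIRS) {
            postBatch();
            // A real extractor would check an isCancelled flag here and stop early,
            // which is what makes cancellation responsive between batches.
        }
    }

    void postBatch() {
        System.out.println("posting " + pending.size() + " records to the job"); // stand-in
        pending.clear();
        keyValueCount = 0;
    }

    public static void main(String[] args) {
        AggregationBatchSketch extractor = new AggregationBatchSketch();
        for (int i = 0; i < 2500; i++) {
            extractor.onRecord("{\"time\":" + i + ",\"count\":1}", 2);
        }
    }
}
```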
relates elastic/x-pack-elasticsearch#802
Original commit: elastic/x-pack-elasticsearch@ce3a172411
The native process can only handle one operation at a time, so in order to protect against multiple operations at a time (e.g. post data and flush, or multiple post data operations) there should be protection in place to guarantee that at most a single thread interacts with the native process. The current protection is broken when a job close is executed; more specifically, the wait logic is broken here.
This commit changes the threading logic when interacting with the native process by using a custom `ExecutorService` that uses a single worker thread from the `ml_autodetect_process` thread pool to interact with the native process. Requests from the ml apis are initially queued and this worker thread executes these requests one by one in the order they were specified.
Removed the general `ml` threadpool and replaced its usages with `ml_autodetect_process` or `management` threadpool.
Added a new threadpool just for (re)normalizer, so that these operations are isolated from other operations.
relates elastic/x-pack-elasticsearch#582
Original commit: elastic/x-pack-elasticsearch@ff0c8dce0b
The slack tests seem to fail periodically with no output.
This commit tries to add some more verbose output by
making the query more broad and taking failures into account
to uncover what happens in this test.
Relates elastic/x-pack-elasticsearch#836
Original commit: elastic/x-pack-elasticsearch@e601b3a0df
This change adds a retain field to model snapshots.
A user can set retain to true/false via the update model snapshot API.
Model snapshots with retain set to true will not be deleted by
the daily maintenance service regardless of whether they expired.
This allows users to always keep certain snapshots around for
potentially reverting to in the future.
relates elastic/x-pack-elasticsearch#758
Original commit: elastic/x-pack-elasticsearch@2283989a33
* Removed OPENING and CLOSING job states. Instead, when a persistent task has been created and
its status hasn't been set, this means the job hasn't started yet; when the executor changes it to STARTED, it has.
The coordinating node will monitor the cluster state for a period of time until that happens and then return, or time out.
* Refactored job close api to go to node running job task and close job there.
* Changed unexpected job and datafeed exception messages to not mention the state and instead mention that job/datafeed haven't yet started/stopped.
Original commit: elastic/x-pack-elasticsearch@37e778b585
Add the `.monitoring-alerts-2` index template via the exporter. This
avoids a very common problem where the user wipes out their monitoring
indices manually, which means that the watches would then create an index
with dynamic mappings.
This adds a mechanism for posting a template that is not associated with a
Resolver (convenient for the forthcoming work _and_ for a future Logstash
index).
Original commit: elastic/x-pack-elasticsearch@a4cfc48191
The stopped and removeOnCompletion flags are not currently used, this commit removes them for now to simplify things.
Original commit: elastic/x-pack-elasticsearch@c636c2817e
Previously a `kill -9` on the `autodetect` process associated with a
job would leave the job in the OPENED state.
Now if the C++ process dies before a request to close the job is made
then the job state is set to FAILED.
For this purpose C++ process death is defined as end-of-file on the
log stream. (Technically it would be possible to get end-of-file on
the log stream while the C++ process was still running, but this
would also represent an unexpected and undesirable situation.)
Original commit: elastic/x-pack-elasticsearch@2b74c56a79
This is to avoid losing data counts when the job gets restarted on another node.
The job stats api returns live data counts, which may not have been persisted to an index,
so getting the data counts via the search api gives us a better guarantee that when
the job gets restarted the data counts are there too. During job restart, a get call is
done to get the data counts in order to initialize the job.
Original commit: elastic/x-pack-elasticsearch@901952da85
Previously the GET/PUT/DELETE filters actions were master node actions. This is not necessary since the filters are stored in an index rather than the cluster state. This change makes the actions extend `HandledTransportAction` so they can be run on any node.
The change also makes PutFilterAction.TransportAction use the TransportBulkAction instead of the deprecated TransportIndexAction.
relates elastic/x-pack-elasticsearch#756
Original commit: elastic/x-pack-elasticsearch@c6df04382e
This commit makes the MonitoringDoc immutable and removes the type() and id() methods from "resolvers" so that they are not anymore in charge of computing the documents types and ids. Now each MonitoringDoc knows its type and is able to compute its own id if needed.
Original commit: elastic/x-pack-elasticsearch@5161cedcc8
This commit marks the x-pack plugin as having a native controller. This
is now a requirement in core for any plugin that forks a native process
to display a warning to the user when they install the plugin.
Relates elastic/x-pack-elasticsearch#839
Original commit: elastic/x-pack-elasticsearch@3529250023
Refactors NodePersistentTask and RunningPersistentTask into a single AllocatedPersistentTask. Makes it possible to update Persistent Task Status via AllocatedPersistentTask.
Original commit: elastic/x-pack-elasticsearch@8f59d7b819
All of our code supports configuring email addresses in the
email action not only via a JSON array, but also via a comma
separated value (we also have tests for this). However, in one place
we did not support this: where an email template is rendered into
a concrete email.
This commit fixes the last piece, so that users will be able to
specify comma separated email addresses.
The main use case for this is having an array of email addresses,
that can be joined in mustache with a comma in order to send to
several recipients.
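A minimal sketch of the parsing rule (not the actual Watcher email template code): accept a comma separated string and normalise it to individual addresses.

```java
import java.util.ArrayList;
import java.util.List;

public class EmailAddressListSketch {
    /** Accepts "a@x.com,b@x.com" as well as a single address and returns the individual addresses. */
    static List<String> parseAddresses(String value) {
        List<String> addresses = new ArrayList<>();
        for (String part : value.split(",")) {
            String trimmed = part.trim();
            if (!trimmed.isEmpty()) {
                addresses.add(trimmed);
            }
        }
        return addresses;
    }

    public static void main(String[] args) {
        // e.g. the result of a mustache template joining an array of recipients with a comma
        System.out.println(parseAddresses("alice@example.com, bob@example.com"));
        System.out.println(parseAddresses("carol@example.com"));
    }
}
```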
Original commit: elastic/x-pack-elasticsearch@19794ba612
This commit ensures that upon reopening a job, the in-memory
model size stats are correctly initialized from the ones
last persisted in the results index.
This fixes the bug that could be seen upon opening a job
that has processed data and immediately calling its _stats
API only to see the model size stats are zero.
In addition, this PR refactors getting the parameters needed to
open an autodetect job:
- Previously, there was a method chaining together multiple
callbacks to the job provider.
- These methods were retrieving data via GETs which is not
going to work with index rollover.
Note, this PR is not eliminating all GETs. More work is needed
to fully support index rollover.
relates elastic/x-pack-elasticsearch#801
Original commit: elastic/x-pack-elasticsearch@1ef1d44b32
The xcontent parser was only set to read all data into a map,
which did not work when the returned data was in the form of an
array (for example the cat API does this if the response
format is set to JSON).
relates elastic/x-pack-elasticsearch#351
Original commit: elastic/x-pack-elasticsearch@08ad457bf6
If forced, the internal RemovePersistentTasks API is invoked instead of going through
ML. This will remove the task, which should trigger the task framework to do
necessary cleanup.
At that point, the Delete* APIs interpret a missing task as CLOSED/STOPPED,
so they can be removed regardless of the original state.
Original commit: elastic/x-pack-elasticsearch@bff23c7840
* [ML] Support all XContent types in Data API
This changes the POST Data API so that it accepts all XContent types instead of just JSON.
For now the datafeed is restricted to only sending JSON to the POST data API.
* Rename SimpleJsonRecordReader to XContentRecordReader
Also renames `DataFormat.JSON` to `DataFormat.XCONTENT`
* fixes YAML tests
Original commit: elastic/x-pack-elasticsearch@5fd20690b8