The distribution of watches now happens on the nodes that hold the
watches index, instead of on the master node. This requires several
changes to the current implementation.
1. Running on shards and replicas
Watches now run on the nodes that hold the watches index, on both its
primaries and replicas. To ensure that a watch does not run twice, there is
logic that checks the local shards, computes a murmur hash of the watch id,
and takes the result modulo the number of shards and replicas; this
determines whether a watch should run locally (see the sketch after this
list). Reloading happens
2. Several master node actions moved to a HandledTransportAction, as they
are basically just aliases for indexing actions, among them the
put/delete/get watch actions, the acknowledgement action, and the
de/activate actions
3. Stats action moved to a broadcast node action, because we potentially
have to query every node to get watcher statistics
4. Starting/stopping watcher is now a master node action, which updates
the cluster state; listeners then act on it. Because of this, watches
can be running on two systems if those have different cluster state
versions, until the new watcher state is propagated
5. Watcher is now started on all nodes. With the exception of the ticker
schedule engine, most classes do not need a lot of resources while running.
However, they have to run because of the execute watch API, which can hit
any node; it does not make sense to find the right shard for this watch
and only then execute (as this also has to work with a watch that has not
been stored before)
6. By using an indexing operation listener, each storing of a watch now
parses the watch first and only stores it on successful parsing (see the
listener sketch after this list)
7. Execute watch API now uses the watcher threadpool for execution
8. Getting the number of watches for the stats now simply asks the
different execution engines how many watches are scheduled, so this no
longer requires a search
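A minimal sketch of the local-execution check from item 1, assuming
Elasticsearch's Murmur3HashFunction as the hash; the shard-ordinal inputs
are hypothetical stand-ins for the real shard bookkeeping:

```java
import org.elasticsearch.cluster.routing.Murmur3HashFunction;

/**
 * Every node that holds a shard copy (primary or replica) of the watches
 * index runs the same computation; only the copy whose ordinal matches the
 * watch's hash bucket executes the watch, so no watch runs twice.
 *
 * localShardOrdinal and totalShardCopies are hypothetical inputs: this
 * copy's position among all copies, and shards * (1 + replicas).
 */
static boolean shouldRunLocally(String watchId, int localShardOrdinal, int totalShardCopies) {
    // floorMod keeps the bucket non-negative even for negative hash values
    int bucket = Math.floorMod(Murmur3HashFunction.hash(watchId), totalShardCopies);
    return bucket == localShardOrdinal;
}
```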
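Item 6 can be pictured with an IndexingOperationListener; the signature
below follows the 6.x shape of the interface, and the parser hook is a
hypothetical stand-in for the real watch parser:

```java
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.index.engine.Engine;
import org.elasticsearch.index.shard.IndexingOperationListener;
import org.elasticsearch.index.shard.ShardId;

import java.util.function.Consumer;

/**
 * Parses a watch before the document is indexed, so an unparseable watch
 * never reaches the watches index. The parser is a hypothetical stand-in.
 */
public class WatchParsingListener implements IndexingOperationListener {

    private final Consumer<BytesReference> parser;

    public WatchParsingListener(Consumer<BytesReference> parser) {
        this.parser = parser;
    }

    @Override
    public Engine.Index preIndex(ShardId shardId, Engine.Index operation) {
        // An exception thrown here fails the indexing operation, so the
        // watch is only stored when parsing succeeds.
        parser.accept(operation.parsedDoc().source());
        return operation;
    }
}
```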
There will be follow up commits on this one, mainly to ensure BWC compatibility.
Original commit: elastic/x-pack-elasticsearch@0adb46e658
Within the same JVM, setting the number of processors available to Netty
can only be done once. However, tests randomize the number of processors
and so without intervention would attempt to set this value multiple
times. Therefore, we need to use a flag that prevents setting this value
in tests.
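A minimal sketch of such a guard, assuming Netty's NettyRuntime API; the
system property name mirrors the one used in the codebase but should be
treated as illustrative:

```java
import io.netty.util.NettyRuntime;

/**
 * Netty accepts only one processor count per JVM, so tests (which
 * randomize the count) disable the call via a flag instead of tripping
 * Netty's "already set" check on the second attempt.
 */
static void maybeSetAvailableProcessors(int processors) {
    // illustrative flag; tests set it to "false"
    if (Boolean.parseBoolean(System.getProperty("es.set.netty.runtime.available.processors", "true"))) {
        NettyRuntime.setAvailableProcessors(processors);
    }
}
```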
Relates elastic/x-pack-elasticsearch#1266
Original commit: elastic/x-pack-elasticsearch@d127149725
We are back to having a 140-column limit. While at some point we will
apply auto-formatting tools to the code base, that is a down-the-road
thing and adjusting the suppressions now shaves minutes off the build
time.
Relates elastic/x-pack-elasticsearch#1260
Original commit: elastic/x-pack-elasticsearch@9a96aec9e6
Many of the tests assume that the trial license has already been generated before the test runs. However, since license generation is triggered
asynchronously upon node startup, there is no guarantee that it has completed before the tests execute, leading to null values when
checking clusterService.state().metaData().custom(LicensesMetaData.TYPE).
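One way to picture the fix, a minimal sketch assuming the test extends
ESTestCase and can poll with assertBusy until the asynchronously generated
license appears; getLicense() is assumed from LicensesMetaData's API:

```java
// Wait for the async trial license generation to complete before
// asserting on it, instead of reading the cluster state immediately.
assertBusy(() -> {
    LicensesMetaData licenses = clusterService.state().metaData().custom(LicensesMetaData.TYPE);
    assertNotNull(licenses);
    assertNotNull(licenses.getLicense());
});
```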
Original commit: elastic/x-pack-elasticsearch@d909c9ba95
LicenseManagerServiceTests sometimes fails in Jenkins, but fairly
rarely. We don't get useful logs when it does. This cranks up
the log level and adds some more assertions so we can better track
down where the failure comes from.
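For illustration, the test framework's @TestLogging annotation is the usual
way to raise a package's log level for a single test class; the logger name
here is an assumption:

```java
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.test.junit.annotations.TestLogging;

// TRACE-level license logs for this class only, so the rare Jenkins
// failure comes with something useful to read. Logger name assumed.
@TestLogging("org.elasticsearch.license:TRACE")
public class LicenseManagerServiceTests extends ESTestCase {
    // ... tests ...
}
```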
Relates to elastic/x-pack-elasticsearch#222
Original commit: elastic/x-pack-elasticsearch@3e08725fc7
* [DOCS] Add ML limitations
* [DOCS] Address feedback about ML limitations
* [DOCS] Change ML limitations capitalization
Original commit: elastic/x-pack-elasticsearch@41682d8d93
When a user creates a datafeed, as well as checking that they have permission
to create a datafeed, we also check that they have permission to search the
indices they've configured the datafeed to search.
Previously this second check was erroneously done for the user who issued
the put_datafeed request, whereas it should be done as the run-as user for
that request.
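A hypothetical sketch of the corrected check; every name below is
illustrative, and the point is only that the effective (run-as) user is
resolved before the privilege check:

```java
// All names are hypothetical. Resolve the effective user first: the
// run-as user when the request carries one, otherwise the requester.
String effectiveUser = runAsUser != null ? runAsUser : authenticatedUser;

// Then verify search permission on the datafeed's indices for that user,
// not for whoever issued the put_datafeed request.
if (privilegeChecker.hasIndexPrivilege(effectiveUser, datafeedIndices, "read") == false) {
    throw new ElasticsearchSecurityException("insufficient privileges to create datafeed");
}
```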
Original commit: elastic/x-pack-elasticsearch@4c35204c66
When we revert to a snapshot and delete intervening results,
we should delete with an open end on the time range, to cover the
case where future data has been posted to the job.
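For illustration, a range query with only a lower bound expresses the
open-ended delete; the field name and variable are placeholders:

```java
import org.elasticsearch.index.query.QueryBuilders;

// gte() with no upper bound leaves the range open-ended, so results from
// "future" data posted beyond the revert point are deleted as well.
QueryBuilders.rangeQuery("timestamp").gte(revertTimestamp);
```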
Original commit: elastic/x-pack-elasticsearch@c3f5e8f19e
The one-argument ctor for `Script` creates a script with the
default language, but most usages of it are for testing and either
don't care about the language or are for use with
`MockScriptEngine`. This replaces most usages of the one-argument
ctor on `Script` with calls to `ESTestCase#mockScript` to make
it clear that the tests don't need the default scripting language.
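For example (placeholder script body), the before/after looks like:

```java
import org.elasticsearch.script.Script;
import org.elasticsearch.test.ESTestCase;

// Before: the one-argument ctor silently pins the default scripting
// language, even though the test never relies on it.
Script implicit = new Script("doc['field'].value");

// After: mockScript targets MockScriptEngine explicitly, so the test
// makes no claim about the default language.
Script mocked = ESTestCase.mockScript("doc['field'].value");
```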
Original commit: elastic/x-pack-elasticsearch@c1d05b7357
When the active directory realm was refactored to add support for authenticating against multiple
domains, only the default authenticator respected the user_search.filter setting. This commit moves
this down to the base authenticator and also changes the UPN filter to not include sAMAccountName
in the filter.
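The filter change can be pictured with the UnboundID LDAP SDK that the
realm is built on; the filters below are illustrative, not the shipped
defaults:

```java
import com.unboundid.ldap.sdk.Filter;

// Previously (illustrative): the UPN lookup also matched sAMAccountName.
Filter before = Filter.createORFilter(
        Filter.createEqualityFilter("userPrincipalName", "jdoe@example.com"),
        Filter.createEqualityFilter("sAMAccountName", "jdoe"));

// Now: match on the user principal name only.
Filter after = Filter.createEqualityFilter("userPrincipalName", "jdoe@example.com");
```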
Original commit: elastic/x-pack-elasticsearch@d2c19c9bee
- Removes the need to handle exceptions from action methods
- Clearly renames DatafeedJobIT to DatafeedJobsRestIT to distinguish
from DatafeedJobsIT
- Refactors DatafeedJobsIT to reuse MlNativeAutodetectIntegTestCase
Original commit: elastic/x-pack-elasticsearch@5bd0c01391
Cross cluster search uses ClusterSearchShardsAction under the covers.
Without this change, you would need both the "read_cross_cluster" and "view_index_metadata" privileges in order to have permission to execute searches from a remote cluster.
Original commit: elastic/x-pack-elasticsearch@65a6aff329
If a single permission set does not have a query defined, then this should be considered as the user
not having document level security for the indices matching that pattern. However, the lack of
document level security was not being taken into account, and document level security was being
applied when it should not have been.
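A hypothetical sketch of the corrected rule, where null stands for a
permission set with no query defined:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/**
 * Each element holds the DLS queries of one permission set matching the
 * index, with null meaning "no query defined". If any matching set has
 * no query, the user has unrestricted access and no DLS may be applied.
 */
static Set<String> effectiveDlsQueries(List<Set<String>> matchingPermissionSets) {
    Set<String> combined = new HashSet<>();
    for (Set<String> queries : matchingPermissionSets) {
        if (queries == null) {
            return null; // one unrestricted grant disables DLS entirely
        }
        combined.addAll(queries);
    }
    return combined;
}
```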
Original commit: elastic/x-pack-elasticsearch@f5777c2f37