The distribution of watches now happens on the nodes that hold the
watches index shards, instead of on the master node. This requires several
changes to the current implementation.
1. Running on shards and replicas
Watches now run on the nodes holding the watches index, on both its
primaries and replicas. To ensure that watches do not run twice, there is
logic which checks the local shards, runs a murmur hash on the watch id, and
takes the result modulo the number of shards and replicas; this determines
whether a watch should run locally (see the sketch after this list).
Reloading happens whenever the shard allocation of the watches index changes.
2. Several master node actions have been moved to a HandledTransportAction,
as they are basically just aliases for indexing actions; among them the
put/delete/get watch actions, the acknowledgement action, and the de/activate
actions
3. Stats action moved to a broadcast node action, because we potentially
have to query every node to get watcher statistics
4. Starting/stopping watcher is now a master node action, which updates
the cluster state; listeners then act on those changes. Because of this,
watches can be running on two nodes, if those have different cluster state
versions, until the new watcher state is propagated
5. Watcher is started on all nodes now. With the exception of the ticker
schedule engine, most classes do not need a lot of resources while running.
However, they have to run because of the execute watch API, which can hit
any node - it does not make sense to find the right shard for this watch
and only then execute it (as this also has to work with a watch that has not
been stored before)
6. By using an indexing operation listener, each storing of a watch now
parses the watch first and only stores it on successful parsing
7. Execute watch API now uses the watcher threadpool for execution
8. Getting the number of watches for the stats now simply asks the
different execution engines how many watches are scheduled, so this
no longer requires a search
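As a rough illustration of the ownership check in item 1, here is a minimal, hypothetical sketch: the class and method names are made up, and a simple stand-in hash replaces the murmur hash the commit message describes.

```java
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of the check from item 1: every copy (primary or
// replica) of a watches index shard gets a position 0..copies-1, and a
// watch runs on this node only if hash(watchId) % copies lands on the
// position of the locally allocated copy, so each watch runs exactly once.
final class WatchDistribution {

    static boolean shouldRunLocally(String watchId, int totalShardCopies, int localCopyIndex) {
        int bucket = Math.floorMod(hash(watchId), totalShardCopies);
        return bucket == localCopyIndex;
    }

    // Stand-in for the murmur hash used by the actual implementation.
    private static int hash(String watchId) {
        int h = 0;
        for (byte b : watchId.getBytes(StandardCharsets.UTF_8)) {
            h = 31 * h + b;
        }
        return h;
    }
}
```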
There will be follow-up commits on this one, mainly to ensure backwards compatibility (BWC).
Original commit: elastic/x-pack-elasticsearch@0adb46e658
This is analogous to the bwc-zip for elasticsearch. The one caveat is
that, due to the structure of how ES+xpack must be checked out, we end up with
a third clone of elasticsearch (the second being in :distribution:bwc-zip).
But the rolling upgrade integ test passes with this change.
relates elastic/x-pack-elasticsearch#870
Original commit: elastic/x-pack-elasticsearch@34bdce6e99
The wait condition used for integ tests by default calls the cluster
health api with wait_for_nodes and wait_for_status. However, xpack
overrides the wait condition to add auth, but most of these conditions
still looked at the root ES url, which means the tests are susceptible
to race conditions between the check and node startup. This change modifies
the url for the authenticated wait condition to check the health api,
with the appropriate wait_for_nodes and wait_for_status.
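Conceptually, the authenticated wait condition now behaves like this hypothetical standalone check; the host, credentials, node count, and class name are made-up values for illustration.

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Hypothetical standalone version of the authenticated wait condition:
// instead of probing the root ES url, hit the health api so the request
// only returns once the expected nodes have joined and the status is met.
public final class WaitForCluster {
    public static void main(String[] args) throws Exception {
        String health = "http://localhost:9200/_cluster/health"
                + "?wait_for_nodes=2&wait_for_status=yellow&timeout=30s";
        HttpURLConnection conn = (HttpURLConnection) new URL(health).openConnection();
        // x-pack security also protects the health endpoint, hence basic auth
        String token = Base64.getEncoder().encodeToString(
                "test_user:changeme".getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + token);
        // 200 means the condition was met; 408 means the wait timed out
        System.out.println("cluster health returned HTTP " + conn.getResponseCode());
    }
}
```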
Original commit: elastic/x-pack-elasticsearch@0b23ef528f
Currently, both the NativeUsersStore and NativeRolesStore can undergo
multiple state transitions. This is done primarily to check whether the
security index is usable before proceeding. However, such checks are
only needed for the tests: if the security index is unavailable
when it is needed, the downstream actions invoked by the
NativeUsersStore and NativeRolesStore will throw the appropriate
exceptions notifying of that condition. In addition, both the
NativeUsersStore and NativeRolesStore contained a lot of duplicated code
that listened for cluster state changes and made the exact same state
transitions.
This commit removes the complicated state transitions in both classes
and enables both classes to use the SecurityTemplateService to monitor
all of the security index lifecycle changes they need to be aware of.
This commit also moves the logic for determining if the security index
needs template and/or mapping updates to the SecurityLifecycleService,
and makes the NativeRealmMigrator solely responsible for applying the
updates.
Original commit: elastic/x-pack-elasticsearch@b31d144597
Adds a new `xpack.security.authc.accept_default_password` setting that defaults to `true`. If it is set to false, then the default password is not accepted in the reserved realm.
Adds a bootstrap check that the above setting must be set to `false` if security is enabled.
Adds docs for the new setting and the bootstrap check.
Changed `/_enable` and `/_disable` to store a blank password if the user record did not previously exist, which is interpreted to mean "treat this user as having the default password". The previous functionality would explicitly set the user's password to `changeme`, which would then prevent the new configuration setting from doing its job.
Any existing reserved users that had their password set to `changeme` are migrated to the blank password (per the above paragraph).
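For illustration, a node opting out of the default password would set the new flag to false; a minimal sketch using the Settings builder (the helper class name here is made up):

```java
import org.elasticsearch.common.settings.Settings;

// Made-up helper showing the new setting in use: with security enabled,
// the new bootstrap check requires this flag to be set to false.
public final class DisableDefaultPassword {
    public static Settings nodeSettings() {
        return Settings.builder()
                .put("xpack.security.authc.accept_default_password", false)
                .build();
    }
}
```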
Closes: elastic/elasticsearch#4333
Original commit: elastic/x-pack-elasticsearch@db64564093
This commit brings back support for an auto-generated certificate and private key for
transport traffic. The auto-generated certificate and key can only be used in development
mode; when moving to production a key and certificate must be provided.
For the edge case of a user not wanting to encrypt their traffic, the user can set
the cipher_suites setting to `TLS_RSA_WITH_NULL_SHA256` or a similar cipher, but a key/cert
is still required.
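For that edge case, the configuration would look roughly like the sketch below; the exact setting key varies across versions, so `xpack.ssl.cipher_suites` is an assumption here.

```java
import org.elasticsearch.common.settings.Settings;

// Rough sketch of the edge case described above: a NULL cipher disables
// encryption of transport traffic, but a key and certificate are still
// required. The setting key is an assumption and may differ by version.
public final class NullCipherSettings {
    public static Settings nodeSettings() {
        return Settings.builder()
                .put("xpack.ssl.cipher_suites", "TLS_RSA_WITH_NULL_SHA256")
                .build();
    }
}
```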
Closes elastic/elasticsearch#4332
Original commit: elastic/x-pack-elasticsearch@b7a1e629f5
* Build: Convert xplugins to use new extra projects setup
This change makes the gradle initialization for xplugins look in the
correct location for elasticsearch, which is now as a sibling of an
elasticsearch-extra directory, with x-plugins as a child of the extra
directory.
The elasticsearch side of this change is
elastic/elasticsearch#21773. This change will enable renaming x-plugins
to x-pack, see elastic/elasticsearch#3643.
Original commit: elastic/x-pack-elasticsearch@09398aea5a
This commit adds basic tests that store a user and a role using the native API. The test checks
that the user and role can be used prior to starting the upgrade. The realm and roles caches are
also cleared to ensure the next authentication will require a read from the security index; this
ensures we are actually testing reads from the index.
Original commit: elastic/x-pack-elasticsearch@396862da94
When one of the 2 nodes in the old cluster is shut down, shards that were on that node become unassigned and are marked to be
delay-allocated: either a node with shard data for that shard becomes available, or the allocation of the shards is delayed for a minute.
In the mixed cluster the replica shard might not be allocated, as the primary is already on the node with the newer version and replicas are not allowed
to be allocated on a node with an older version of ES. Once both nodes are upgraded, the delay might still be in place, and can only be cancelled early if there
is shard data available on the node. If there never was a shard on that node, though, it will take a minute and run into the timeout when checking for green.
This commit ensures that all shards are fully allocated before we run the rolling restart scenario.
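The one-minute delay mentioned above is the index-level delayed-allocation timeout; for reference, a sketch of the knob involved (illustration only, this is not what the commit changes):

```java
import org.elasticsearch.common.settings.Settings;

// Illustration only: the one-minute delay above comes from the index-level
// delayed-allocation timeout (default 1m). This commit does not change it;
// it waits for full allocation before the rolling restart instead.
public final class DelayedAllocation {
    public static Settings indexSettings() {
        return Settings.builder()
                .put("index.unassigned.node_left.delayed_timeout", "1m")
                .build();
    }
}
```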
Original commit: elastic/x-pack-elasticsearch@a0d9b1b043
If the primary shard of an index with (number_of_replicas > 0) ends up on a new node in a mixed cluster, the replica cannot be allocated to the old node as
the new node might have written segments that use a new postings format or codec that is not available on the older node.
As x-pack automatically creates indices with number_of_replicas > 0, for example monitoring-data-*, the test can only wait for yellow in a mixed cluster.
Original commit: elastic/x-pack-elasticsearch@945d9e3811
This commit adds a timeout to the cluster health call that we wait on, so that we can
see the status of the health request instead of getting timeout failures with no
information to go on.
Original commit: elastic/x-pack-elasticsearch@2f34d01e00
This change allows reads of our native users and roles when the template version has not been updated to
match the current version. This is useful for rolling upgrades where the nodes are also being actively
queried and/or indexed into. Without this, we can wreak havoc on a cluster by causing exceptions during
replication, which leads to shard failures. On nodes that match the version defined in the template,
write operations are allowed since we know that we are backwards compatible in terms of format but we
may have added new fields and shouldn't index them until the mappings and template have been updated.
As part of this, the rolling upgrade tests from core were used as the basis for a very basic set of tests
for doing a rolling upgrade with x-pack.
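The gating described above reads roughly like this hypothetical sketch; the class and field names are made up.

```java
// Hypothetical sketch of the gating described above: reads of native users
// and roles are always allowed during a rolling upgrade, while writes are
// held back until the security template matches this node's version, so we
// never index fields that the not-yet-updated mappings do not know about.
final class SecurityIndexGate {
    private final int templateVersion; // version found on the security template
    private final int nodeVersion;     // version this node expects

    SecurityIndexGate(int templateVersion, int nodeVersion) {
        this.templateVersion = templateVersion;
        this.nodeVersion = nodeVersion;
    }

    boolean readAllowed() {
        return true; // document formats stay backwards compatible
    }

    boolean writeAllowed() {
        return templateVersion == nodeVersion;
    }
}
```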
Closes elastic/elasticsearch#4126
Original commit: elastic/x-pack-elasticsearch@9be518ef00