In a mixed cluster with 5.6 and 6.0 nodes, the template upgrade
service has a cluster state listener that deletes the old watches and
triggered_watches index templates. However, during that time the
WatcherIndexTemplateRegistry on the 5.6 node checks if the templates are missing and
adds them back. This results in a race, because the new .watches index
template does not get added by the WatcherIndexTemplateRegistry when the
6.0 node is not the master node.
This commit circumvents this issue by only deleting the watches and
triggered watches templates during the upgrade process.
Original commit: elastic/x-pack-elasticsearch@71380f460a
Following elastic/elasticsearch#23997, this was only working because of
the way we were suppressing certain errors during job deletion.
This PR makes the situations we want to ignore during job deletion
clearer and adheres to the intention of elastic/elasticsearch#23997
by passing only concrete indices to the `indices` arguments of
deletion calls.
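A rough sketch of the approach, in Python for brevity (the actual change is in the Java job deletion code; the alias name and query below are illustrative): resolve the job's alias to concrete indices first, and pass only those names to the deletion call.

```python
# Sketch only: resolve an alias to concrete indices, then delete using
# those concrete names so no alias/wildcard resolution errors need to be
# suppressed. Alias name and query are illustrative.
from elasticsearch import Elasticsearch, NotFoundError

es = Elasticsearch("http://localhost:9200")

def delete_job_results(job_id):
    alias = ".ml-anomalies-" + job_id          # the job's results alias
    try:
        concrete = list(es.indices.get_alias(name=alias).keys())
    except NotFoundError:
        return                                 # nothing behind the alias, nothing to delete
    es.delete_by_query(index=",".join(concrete),
                       body={"query": {"term": {"job_id": job_id}}})
```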
Original commit: elastic/x-pack-elasticsearch@2458c3db40
If the cluster is still applying cluster state updates while the license is being disabled, some updates might not go through,
triggering an assertion failure at the end of the test that checks whether all cluster states have been applied.
relates elastic/x-pack-elasticsearch#1627
Original commit: elastic/x-pack-elasticsearch@e11863fd02
Adds an update procedure for ML index mappings in order to allow adding new fields. The ES version is stored in the "_meta" field of the ML mappings, which are applied to every index. When a job is opened, this version is checked on the (shared) index and a mapping update is performed if the stored version is older than the current one.
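A minimal sketch of the version check, in Python against the REST layer rather than the actual Java implementation (call shapes follow the 6.x Python client; the index name, type, new field, and version constant are illustrative):

```python
# Illustrative only: read the version stamped in the mapping's "_meta",
# and push a mapping update (new fields plus a new version stamp) if it
# is older than the version of this node.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
CURRENT_VERSION = "6.0.0"
NEW_FIELDS = {"properties": {"new_result_field": {"type": "keyword"}}}

def version_tuple(v):
    return tuple(int(x) for x in v.split("."))

def ensure_mappings_current(index, doc_type="doc"):
    mapping = es.indices.get_mapping(index=index)[index]["mappings"][doc_type]
    stored = mapping.get("_meta", {}).get("version")
    if stored is None or version_tuple(stored) < version_tuple(CURRENT_VERSION):
        body = dict(NEW_FIELDS, _meta={"version": CURRENT_VERSION})
        es.indices.put_mapping(index=index, doc_type=doc_type, body=body)
```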
Original commit: elastic/x-pack-elasticsearch@211608c7ad
* add index checks
The following index checks apply to both 5.x and 6.0:
- coercion of boolean fields
- the `_all` meta field is now disabled by default
- the `include_in_all` mapping parameter is now disallowed (a sketch of this check follows the list)
- unrecognized `match_mapping_type` options are no longer silently ignored
- similarity settings
- store settings
- store throttling settings
- shadow replicas have been removed
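A rough Python sketch of what one of these checks might look like, here flagging `include_in_all` in a mapping (the real checks live in the Java upgrade code; the sample field names are made up):

```python
# Illustrative check: recursively flag any field whose mapping still uses
# the disallowed `include_in_all` parameter.
def find_include_in_all(properties, path=""):
    issues = []
    for field, config in (properties or {}).items():
        full = f"{path}.{field}" if path else field
        if "include_in_all" in config:
            issues.append(f"field [{full}] uses the disallowed `include_in_all` parameter")
        issues.extend(find_include_in_all(config.get("properties"), full))
        issues.extend(find_include_in_all(config.get("fields"), full))
    return issues

mapping = {"properties": {"title": {"type": "text", "include_in_all": True}}}
print(find_include_in_all(mapping["properties"]))
# ['field [title] uses the disallowed `include_in_all` parameter']
```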
Original commit: elastic/x-pack-elasticsearch@5583b21763
This is related to elastic/x-pack-elasticsearch#1217. This commit adds a ClusterStateListener at
node startup. Once the cluster and security index are ready, this
listener will attempt to set the elastic user's password with the
bootstrap password pulled from the keystore. If the password is not in
the keystore or the elastic password has already been set, nothing will
be done.
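The decision flow, sketched in Python rather than the actual Java listener (the helper callables stand in for the real checks and are hypothetical):

```python
# Sketch of the listener's decision flow only; not the actual Java code.
def maybe_apply_bootstrap_password(security_index_ready, keystore,
                                   elastic_password_set, set_elastic_password):
    if not security_index_ready():
        return False                       # wait for cluster/security index
    password = keystore.get("bootstrap.password")
    if password is None:
        return False                       # no bootstrap password: do nothing
    if elastic_password_set():
        return False                       # password already set: do nothing
    set_elastic_password(password)
    return True

# Example: index ready, keystore entry present, password not yet set.
maybe_apply_bootstrap_password(
    security_index_ready=lambda: True,
    keystore={"bootstrap.password": "s3cret"},
    elastic_password_set=lambda: False,
    set_elastic_password=lambda pw: print("setting elastic password"),
)
```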
Original commit: elastic/x-pack-elasticsearch@7fc4943c45
* [DOCS] Update model_memory_limit
* [DOCS] Clarify minimum model_memory_limit value
* [DOCS] More updates to model_memory_limit
* [DOCS] Address feedback in jobresource.asciidoc
Original commit: elastic/x-pack-elasticsearch@3c62719037
As the TemplateUpgradeService requires permissions to add and
delete index templates, we have to grant those to the _system user.
This commit adds such permissions plus an integration test.
Original commit: elastic/x-pack-elasticsearch@a76ca9c738
The monitoring watches execute roughly the same queries even when
they run against different clusters. However, because of the way they were created,
where the cluster name is inserted via search & replace instead of via
watch metadata, every watch is different from a script
compilation cache perspective. On top of that, each of those watches is
executed once a minute. So if a new node becomes master and you monitor
three clusters, this results in a fair share of compilations in the first minute. The
reason for the compilation is that the search input uses
mustache to add dynamic parts to a search.
Several of those watches also need to compile more than one search
request.
The default maximum for script compilations is only 15, and thus at
least one watch will not be executed due to failing script compilations.
This commit changes the four watches so that the search requests are
cacheable. No matter how many clusters you monitor, only four
compilations are needed, one per watch.
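A simplified before/after illustration (Python dicts standing in for the watch JSON; these are not the actual cluster-alert watch bodies, and the metadata path is an assumption): moving the cluster UUID into watch metadata makes the mustache source identical across clusters, so it is compiled once and then served from the cache.

```python
# Before: the cluster id is search-and-replaced into the search source,
# so every monitored cluster produces a different script to compile.
search_body_before = {"query": {"term": {"cluster_uuid": "abc123"}}}

# After: the cluster id lives in the watch metadata and the search source
# only references it via mustache; the script text is the same for every
# cluster and is compiled (and cached) exactly once.
watch_after = {
    "metadata": {"xpack": {"cluster_uuid": "abc123"}},
    "input": {"search": {"request": {"body": {
        "query": {"term": {"cluster_uuid": "{{ctx.metadata.xpack.cluster_uuid}}"}}
    }}}},
}
```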
Relates elastic/support-dev-help#2090
Original commit: elastic/x-pack-elasticsearch@6c877421bb
This test creates watches in old versions of Elasticsearch, upgrades them after upgrading the cluster to the latest version, and then verifies that they were upgraded correctly.
Original commit: elastic/x-pack-elasticsearch@b9d45eb2a5
* [Monitoring] Email actions for Cluster Alerts
* fix quotations in email fields
* move email vars to transform, and rename for snake_case
* add state to email subject for cluster status alert
* remove types field in kibana_settings search
* simplify email action condition script
* uppercase the state for the email subject
* only append state to email subject if alert is new
* show state in email subject even when alert is resolved
Original commit: elastic/x-pack-elasticsearch@e6fdd8d620
- Document refresh interval for role mapping files
- Fix obsolete shield reference in transport profile example
- Clarify that AD & PKI don't support run_as
- Fix logstash conf examples
- Clarify interaction of SSL settings and PKI realm settings
- Document PKI DN format, and recommend use of pki_dn metadata
- Provide more details about action.auto_create_index during setup
Original commit: elastic/x-pack-elasticsearch@49ddb12a7e
This commit is related to elastic/x-pack-elasticsearch#1896. Currently setup mode means that the
password must be set post 6.0 in order to use x-pack. This interferes with
upgrade tests, as setting the password fails without a properly
upgraded security index.
This commit loosens two aspects of the security:
1. The old default password will be accepted in setup mode (for requests
from localhost).
2. All request types can be submitted in setup mode.
Original commit: elastic/x-pack-elasticsearch@8a2a577038
This is step 2 of elastic/x-pack-elasticsearch#1604.
This change stores `model_memory_limit` as a string with an `mb` unit.
I considered using the `toString` method of `ByteSizeValue`, but it
can lead to accuracy loss. Adding the fixed `mb` unit maintains
accuracy while making clear what unit the value is in.
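A small illustration of the accuracy concern (the rounding below mimics a "largest unit, one decimal place" rendering like `ByteSizeValue`'s; it is not the actual Java code):

```python
# "4097mb" rendered via the largest unit rounds to "4gb", losing a megabyte
# on the round trip; a fixed mb suffix round-trips exactly.
def byte_size_to_string(mb):
    return f"{round(mb / 1024, 1):g}gb" if mb >= 1024 else f"{mb}mb"

original_mb = 4097
rendered = byte_size_to_string(original_mb)        # -> "4gb"
round_tripped = float(rendered[:-2]) * 1024        # -> 4096.0 (1mb lost)
fixed_unit = f"{original_mb}mb"                    # -> "4097mb" (exact)
print(rendered, round_tripped, fixed_unit)
```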
Original commit: elastic/x-pack-elasticsearch@4dc48f0ce8
ML has two types of custom cluster state:
1. jobs
2. datafeeds
These need to be parsed from JSON in two situations:
1. Create/update of the job/datafeed
2. Restoring cluster state on startup
Previously we used exactly the same parser in both situations, but this
severely limited our ability to add new features, because the parser was
very strict. That was good when accepting create/update
requests from users, but when restoring cluster state from disk it meant
that we could not add new fields, as that would prevent reloading in
mixed version clusters.
This commit introduces a second parser that tolerates unknown fields for
each object that is stored in cluster state. Then we use this more
tolerant parser when parsing cluster state, but still use the strict
parser when parsing REST requests.
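A Python sketch of the idea (the real code builds two parsers in Java; the field names are examples and `added_in_future_version` is hypothetical):

```python
# Strict parsing rejects unknown fields (REST requests); lenient parsing
# ignores them (restoring cluster state written by a newer node).
KNOWN_FIELDS = {"job_id", "analysis_config", "data_description"}

def parse_job(source, strict):
    unknown = set(source) - KNOWN_FIELDS
    if strict and unknown:
        raise ValueError(f"unknown fields {sorted(unknown)}")
    return {k: v for k, v in source.items() if k in KNOWN_FIELDS}

doc = {"job_id": "job-1", "analysis_config": {}, "added_in_future_version": True}
parse_job(doc, strict=False)   # cluster state restore: unknown field tolerated
parse_job(doc, strict=True)    # REST request: raises ValueError
```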
relates elastic/x-pack-elasticsearch#1732
Original commit: elastic/x-pack-elasticsearch@754e51d1ec
This is the x-pack side of https://github.com/elastic/elasticsearch/pull/24437
It changes two things. For the disable tests, it uses a valid endpoint instead
of a previously invalid endpoint that happened to return a 400 because the
endpoint was bad, regardless of whether watcher was disabled.
The other change is to create the watches index by putting a watch using the
correct API, rather than manually creating the index. This is needed because
`RestHijackOperationAction` hijacks operations like this and prevents accessing the
endpoint in the regular manner.
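Roughly what the test setup looks like from the REST side (Python with `requests`; the watch id and body are illustrative, the endpoint path is the 5.x/6.x one): store a trivial watch through the watcher API and let the index be created as a side effect.

```python
import requests

# Putting any watch through the proper API creates the watches index,
# without bypassing RestHijackOperationAction.
watch = {
    "trigger": {"schedule": {"interval": "10m"}},
    "input": {"simple": {"payload": "noop"}},
    "condition": {"never": {}},
    "actions": {"log": {"logging": {"text": "noop watch"}}},
}
resp = requests.put(
    "http://localhost:9200/_xpack/watcher/watch/ensure-watches-index",
    json=watch,
)
resp.raise_for_status()
```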
Original commit: elastic/x-pack-elasticsearch@3be78d9aea
Currently, the autodetect process has an `ignoreDowntime`
parameter which, when set to true, results in time being
skipped over to the end of the bucket of the first data
point received. After that, skipping time requires closing
and reopening the job. With regard to datafeeds, this does not
work well with real-time requests, which use the advance-time
API in order to ensure results are created for data gaps.
This commit improves this functionality by making it more
flexible and less ambiguous.
- the flush API now supports a skip_time parameter which sends a control
message to the autodetect process telling it to skip time to a given
value (a usage sketch follows this list)
- the flush API now also returns the last_finalized_bucket_end
time which allows clients to resume data searches correctly
- the datafeed start API issues a skip_time request when the
given start time is after the resume point. It then resumes
the search from the last_finalized_bucket_end time.
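Roughly how a client might use the new parameter (Python with `requests`; endpoint path as in the 6.x REST API, job id and timestamp illustrative):

```python
import requests

JOB_ID = "my-job"
# Ask the autodetect process to skip time forward (epoch ms here), then
# resume searching for input data from the returned last_finalized_bucket_end.
resp = requests.post(
    f"http://localhost:9200/_xpack/ml/anomaly_detectors/{JOB_ID}/_flush",
    params={"skip_time": "1500000000000"},
)
resp.raise_for_status()
resume_from = resp.json()["last_finalized_bucket_end"]
```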
relates elastic/x-pack-elasticsearch#1913
Original commit: elastic/x-pack-elasticsearch@caa5fe8016
This fixes `testDeleteJobAfterMissingAliases` to not fail randomly.
The reason the test was failing is that at some point some aliases
are deleted and the cat-aliases API is called to verify they were
indeed deleted. This was checked by asserting an
index_not_found_exception was thrown by the cat-aliases request.
This sometimes worked because there were no other aliases. However,
that depended on whether other x-pack features had had time to create their
infrastructure. For example, security creates an alias. When those other
aliases had been created, the cat-aliases request did not
fail and the test failed.
This commit simply changes the verification that the read/write
aliases were deleted by replacing the cat-aliases request with
two single get-alias requests.
Original commit: elastic/x-pack-elasticsearch@fe2c7b0cb4
The logging showed a wrong HTTP response status code from a previous
request. In addition, the response body now also gets logged, as debugging
is impossible otherwise.
Original commit: elastic/x-pack-elasticsearch@cc998cd587
This changes the collection from every index statistic to only the ones we actually want, which should help reduce the performance impact of the lookup.
Original commit: elastic/x-pack-elasticsearch@80ae20f382
This is related to elastic/x-pack-elasticsearch#1217 and elastic/x-pack-elasticsearch#1896. Right now we are checking if an
incoming address is the loopback address or a special local address. It
appears that we also need to check whether that address is bound to a
network interface in order to be thorough in our localhost check.
This change mimics how we check for localhost in `PatternRule`.
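In Python terms the stricter check looks roughly like this (the real code is Java following `PatternRule`; `psutil` is assumed here only as a convenient way to enumerate interface addresses):

```python
import ipaddress
import socket
import psutil  # assumed available; used only to list local interface addresses

def is_localhost(raw_address):
    addr = ipaddress.ip_address(raw_address)
    if addr.is_loopback:
        return True
    # Also accept an address only if it is actually bound to one of this
    # machine's network interfaces.
    bound = {a.address.split("%")[0]            # strip IPv6 zone ids
             for addrs in psutil.net_if_addrs().values()
             for a in addrs
             if a.family in (socket.AF_INET, socket.AF_INET6)}
    return raw_address in bound

print(is_localhost("127.0.0.1"))   # True
```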
Original commit: elastic/x-pack-elasticsearch@a8947d6174
Depending on the random numbers fed to the analytics,
it is possible that the first planted anomaly ends up
in a different bucket due to the overlapping buckets feature.
That may then result in only a single interim bucket being available,
because overlapping buckets block the other interim bucket
from being considered.
I am removing the initial anomaly from the test as it is not useful
and it makes the test unstable.
relates elastic/x-pack-elasticsearch#1897
Original commit: elastic/x-pack-elasticsearch@aca7870708