* [Monitoring] Email actions for Cluster Alerts
* fix quotations in email fields
* move email vars to transform, and rename for snake_case
* add state to email subject for cluster status alert
* remove types field in kibana_settings search
* simplify email action condition script
* uppercase the state for the email subject
* only append state to email subject if alert is new
* show state in email subject even when alert is resolved
Original commit: elastic/x-pack-elasticsearch@e6fdd8d620
- Document refresh interval for role mapping files
- Fix obsolete shield reference in transport profile example
- Clarify that AD & PKI don't support run_as
- Fix logstash conf examples
- Clarify interaction of SSL settings and PKI realm settings
- Document PKI DN format, and recommend use of pki_dn metadata
- Provide more details about action.auto_create_index during setup
Original commit: elastic/x-pack-elasticsearch@49ddb12a7e
* A unit test for cli
* Licenses for cli
* Remove licenses for protos (no more deps)
* `SHOW TABLES` returns results in order (makes testing easier)
* Clean up embedded jdbc server
* Wire up embedded cli server
Original commit: elastic/x-pack-elasticsearch@b98aaf446b
This commit is related to elastic/x-pack-elasticsearch#1896. Currently, setup mode means that the
password must be set post-6.0 before x-pack can be used. This interferes with
upgrade tests, as setting the password fails without a properly
upgraded security index.
This commit loosens two aspects of security:
1. The old default password will be accepted in setup mode (requests
from localhost).
2. All request types can be submitted in setup mode.
Original commit: elastic/x-pack-elasticsearch@8a2a577038
This is step 2 of elastic/x-pack-elasticsearch#1604
This change stores `model_memory_limit` as a string with `mb` unit.
I considered using the `toString` method of `ByteSizeValue` but it
can lead to accuracy loss. Adding the fixed `mb` unit maintains
accuracy while making clear what unit the value is in.
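For illustration only, a minimal sketch of the accuracy concern, assuming Elasticsearch's `ByteSizeValue` (6.x package locations); the limit value is hypothetical:
```java
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;

public class ModelMemoryLimitFormat {
    public static void main(String[] args) {
        long modelMemoryLimitMb = 1500; // hypothetical limit in megabytes

        // toString() formats as one decimal of a larger unit, so precision is lost
        // (1500mb comes back as something like "1.4gb").
        String viaToString = new ByteSizeValue(modelMemoryLimitMb, ByteSizeUnit.MB).toString();

        // Appending a fixed "mb" unit keeps the exact value and makes the unit explicit.
        String viaFixedUnit = modelMemoryLimitMb + "mb"; // "1500mb"

        System.out.println(viaToString + " vs " + viaFixedUnit);
    }
}
```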
Original commit: elastic/x-pack-elasticsearch@4dc48f0ce8
`gradle check -xforbiddenPatterns` now passes in jdbc.
This makes running the embedded HTTP server slightly more difficult:
you now have to add the following to your JVM arguments.
```
-ea -Dtests.rest.cluster=localhost:9200 -Dtests.embed.sql=true -Dtests.security.manager=false
```
Depending on your environment, the embedded jdbc connection may give
spurious failures that look like:
```
org.elasticsearch.xpack.sql.jdbc.jdbc.JdbcException: RemoteTransportException[[node-0][127.0.0.1:9300][indices:data/read/search]]; nested: SearchPhaseExecutionException[]; nested: GeneralScriptException[Failed to compile inline script [( params.a0 > params.v0 ) && ( params.a1 > params.v1 )] using lang [painless]]; nested: CircuitBreakingException[[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead; this limit can be changed by the [script.max_compilations_per_minute] setting];
...
Caused by: Failed to execute phase [fetch],
..
Caused by: GeneralScriptException[Failed to compile inline script [( params.a0 > params.v0 ) && ( params.a1 > params.v1 )] using lang [painless]]; nested: CircuitBreakingException[[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead; this limit can be changed by the [script.max_compilations_per_minute] setting];
...
Caused by: CircuitBreakingException[[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead; this limit can be changed by the [script.max_compilations_per_minute] setting]
```
`gradle check` works around this by setting `script.max_compilations_per_minute`
to `1000`.
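The same setting can be raised on a test node along these lines (a sketch only; how `gradle check` actually wires it in may differ):
```java
import org.elasticsearch.common.settings.Settings;

public class TestClusterScriptSettings {
    // Raise the inline-script compilation limit so tests that generate many distinct
    // Painless scripts per minute do not trip the compilation circuit breaker.
    static Settings nodeSettings() {
        return Settings.builder()
                .put("script.max_compilations_per_minute", 1000)
                .build();
    }
}
```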
Another change is that we no longer support loading the test data by
uncommenting some code. Instead we load the test data into Elasticsearch
before the first test and delete it after the last test. This is
so that tests that require different test data can interoperate with
each other. The spec tests all use the same test data but the metadata
tests do not.
Original commit: elastic/x-pack-elasticsearch@8b8f684ac1
ML has two types of custom cluster state:
1. jobs
2. datafeeds
These need to be parsed from JSON in two situations:
1. Create/update of the job/datafeed
2. Restoring cluster state on startup
Previously we used exactly the same parser in both situations, but this
severely limited our ability to add new features, because the
parser was very strict. That was good when accepting create/update
requests from users, but when restoring cluster state from disk it meant
that we could not add new fields, as that would prevent reloading in
mixed-version clusters.
This commit introduces a second parser that tolerates unknown fields for
each object that is stored in cluster state. Then we use this more
tolerant parser when parsing cluster state, but still use the strict
parser when parsing REST requests.
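A rough sketch of the pattern (not the actual ML parser code), assuming an `ObjectParser` variant that takes an ignore-unknown-fields flag; `MlConfig` and its `id` field are hypothetical stand-ins for a job or datafeed definition:
```java
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.xcontent.ObjectParser;
import org.elasticsearch.common.xcontent.XContentParser;

// Hypothetical config class standing in for a job or datafeed definition.
class MlConfig {
    private String id;
    void setId(String id) { this.id = id; }
}

class MlConfigParsers {
    // Strict parser for REST create/update requests: unknown fields are an error.
    static final ObjectParser<MlConfig, Void> STRICT_PARSER =
            new ObjectParser<>("ml_config", false, MlConfig::new);
    // Lenient parser for restoring cluster state: unknown fields (e.g. ones written
    // by a newer node in a mixed-version cluster) are ignored.
    static final ObjectParser<MlConfig, Void> LENIENT_PARSER =
            new ObjectParser<>("ml_config", true, MlConfig::new);

    static {
        ParseField id = new ParseField("id");
        STRICT_PARSER.declareString(MlConfig::setId, id);
        LENIENT_PARSER.declareString(MlConfig::setId, id);
    }

    static MlConfig fromRestRequest(XContentParser parser) {
        return STRICT_PARSER.apply(parser, null);
    }

    static MlConfig fromClusterState(XContentParser parser) {
        return LENIENT_PARSER.apply(parser, null);
    }
}
```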
relates elastic/x-pack-elasticsearch#1732
Original commit: elastic/x-pack-elasticsearch@754e51d1ec
This is the x-pack side of https://github.com/elastic/elasticsearch/pull/24437
It changes two things. For the disable tests, it uses a valid endpoint instead
of a previously invalid endpoint that happened to return a 400 because the
endpoint was bad, regardless of whether Watcher was disabled.
The other change is to create the watches index by putting a watch using the
correct API, rather than manually creating the index. This is because
`RestHijackOperationAction` hijacks operations like this and prevents the
endpoint from being accessed in the regular manner.
Original commit: elastic/x-pack-elasticsearch@3be78d9aea
Currently, the autodetect process has an `ignoreDowntime`
parameter which, when set to true, results in time being
skipped over to the end of the bucket of the first data
point received. After that, skipping time requires closing
and opening the job. With regard to datafeeds, this does not
work well with real-time requests, which use the advance-time
API in order to ensure results are created for data gaps.
This commit improves this functionality by making it more
flexible and less ambiguous.
- the flush API now supports a skip_time parameter which
sends a control message to the autodetect process
telling it to skip time to a given value
- the flush API now also returns the last_finalized_bucket_end
time which allows clients to resume data searches correctly
- the datafeed start API issues a skip_time request when the
given start time is after the resume point. It then resumes
the search from the last_finalized_bucket_end time.
relates elastic/x-pack-elasticsearch#1913
Original commit: elastic/x-pack-elasticsearch@caa5fe8016
- simplify handling of timezone in H2
- fix leaking threadpool in HttpServer
- update Csv tests
- keep the dates as long in internal Page
Original commit: elastic/x-pack-elasticsearch@43a804607f
This fixes `testDeleteJobAfterMissingAliases` to not fail randomly.
The reason the test was failing is that at some point some aliases
are deleted and the cat-aliases API is called to verify they were
indeed deleted. This was checked by asserting an
index_not_found_exception was thrown by the cat-aliases request.
This sometimes worked, as there were no other aliases. However,
that depended on whether other x-pack features had time to create their
infrastructure. For example, security creates an alias. When other
aliases had been created, the cat-aliases request did not
fail and the test failed.
This commit simply changes the verification that the read/write
aliases were deleted by replacing the cat-aliases request with
two single get-alias requests.
Original commit: elastic/x-pack-elasticsearch@fe2c7b0cb4
The logging showed a wrong HTTP response status code from a previous
request. In addition, the body now also gets logged, as debugging
is impossible otherwise.
Original commit: elastic/x-pack-elasticsearch@cc998cd587
This changes from collecting every index statistic to collecting only the ones we actually want. This should help reduce the performance impact of the lookup.
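A hedged sketch of the idea using the standard `IndicesStatsRequest` API; the exact statistics left enabled here are illustrative, not necessarily the ones this change keeps:
```java
import org.elasticsearch.action.admin.indices.stats.IndicesStatsRequest;

public class NarrowIndexStats {
    static IndicesStatsRequest buildRequest() {
        // clear() turns off the default "collect everything" flags, then only the
        // statistics that are actually consumed are re-enabled.
        return new IndicesStatsRequest()
                .clear()
                .docs(true)
                .store(true)
                .indexing(true)
                .search(true);
    }
}
```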
Original commit: elastic/x-pack-elasticsearch@80ae20f382
Flows time zones through the `QueryInitRequest` and into the
`ExpressionBuilder`, which attaches the time zones to date/time
expressions. Modifies the code that generates date aggs and
scripts, and that extracts results, to use the time zones.
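A hedged illustration of attaching a time zone to a generated date aggregation, assuming the 6.x aggregation builder API with Joda `DateTimeZone`; the real SQL code generation is more involved, and the field name is hypothetical:
```java
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramInterval;
import org.joda.time.DateTimeZone;

public class DateAggWithTimeZone {
    static DateHistogramAggregationBuilder build(DateTimeZone clientTimeZone) {
        // The caller's time zone is attached to the agg so day buckets align with
        // the client's calendar days rather than UTC. "date" is a hypothetical field.
        return AggregationBuilders.dateHistogram("by_day")
                .field("date")
                .dateHistogramInterval(DateHistogramInterval.DAY)
                .timeZone(clientTimeZone);
    }
}
```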
Original commit: elastic/x-pack-elasticsearch@d6682580d1