* A unit test for cli
* Licenses for cli
* Remove licenses for protos (no more deps)
* `SHOW TABLES` returns results in order (makes testing easier)
* Clean up embedded jdbc server
* Wire up embedded cli server
Original commit: elastic/x-pack-elasticsearch@b98aaf446b
`gradle check -xforbiddenPatterns` now passes in jdbc.
This makes running the embedded HTTP server slightly more difficult:
you now have to add the following to your JVM arguments.
```
-ea -Dtests.rest.cluster=localhost:9200 -Dtests.embed.sql=true -Dtests.security.manager=false
```
Depending on your environment, the embedded jdbc connection may produce
spurious failures that look like:
```
org.elasticsearch.xpack.sql.jdbc.jdbc.JdbcException: RemoteTransportException[[node-0][127.0.0.1:9300][indices:data/read/search]]; nested: SearchPhaseExecutionException[]; nested: GeneralScriptException[Failed to compile inline script [( params.a0 > params.v0 ) && ( params.a1 > params.v1 )] using lang [painless]]; nested: CircuitBreakingException[[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead; this limit can be changed by the [script.max_compilations_per_minute] setting];
...
Caused by: Failed to execute phase [fetch],
..
Caused by: GeneralScriptException[Failed to compile inline script [( params.a0 > params.v0 ) && ( params.a1 > params.v1 )] using lang [painless]]; nested: CircuitBreakingException[[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead; this limit can be changed by the [script.max_compilations_per_minute] setting];
...
Caused by: CircuitBreakingException[[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead; this limit can be changed by the [script.max_compilations_per_minute] setting]
```
`gradle check` works around this by setting `script.max_compilations_per_minute`
to `1000`.
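If you hit the limit while running against a node you started yourself, one workaround is to raise the same setting on that node. A minimal sketch, assuming you launch Elasticsearch from its distribution directory:
```
# Raise the script compilation limit on a locally started test node,
# mirroring what `gradle check` configures for its test cluster.
bin/elasticsearch -Escript.max_compilations_per_minute=1000
```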
Another change is that we no longer support loading the test data by
uncommenting some code. Instead we load the test data into Elasticsearch
before the first test and delete it after the last test. This is
so that tests that require different test data can interoperate with
each other. The spec tests all use the same test data but the metadata
tests do not.
Original commit: elastic/x-pack-elasticsearch@8b8f684ac1
Currently, the autodetect process has an `ignoreDowntime`
parameter which, when set to true, results in time being
skipped to the end of the bucket of the first data
point received. After that, skipping time requires closing
and reopening the job. With regard to datafeeds, this does not
work well with real-time requests, which use the advance-time
API in order to ensure results are created for data gaps.
This commit improves this functionality by making it more
flexible and less ambiguous.
- the flush API now supports a skip_time parameter which
sends a control message to the autodetect process
telling it to skip time to the given value (see the sketch after this list)
- the flush API now also returns the last_finalized_bucket_end
time, which allows clients to resume data searches correctly
- the datafeed start API issues a skip_time request when the
given start time is after the resume point. It then resumes
the search from the last_finalized_bucket_end time.
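For illustration, a skip_time flush might look roughly like the sketch below; the job name and timestamp are made up and the response shape is only indicative:
```
# Illustrative only: job name and timestamp are placeholders.
curl -XPOST 'localhost:9200/_xpack/ml/anomaly_detectors/my-job/_flush' \
  -H 'Content-Type: application/json' \
  -d '{ "skip_time": "2017-06-01T00:00:00Z" }'
# Indicative response shape:
# {"flushed":true,"last_finalized_bucket_end":1496275200000}
```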
relates elastic/x-pack-elasticsearch#1913
Original commit: elastic/x-pack-elasticsearch@caa5fe8016
- simplify handling of timezone in H2
- fix leaking threadpool in HttpServer
- update Csv tests
- keep the dates as long in internal Page
Original commit: elastic/x-pack-elasticsearch@43a804607f
This fixes `testDeleteJobAfterMissingAliases` so that it no longer fails randomly.
The test was failing because at some point some aliases
are deleted and the cat-aliases API is called to verify they were
indeed deleted. This was checked by asserting that an
index_not_found_exception was thrown by the cat-aliases request.
That sometimes worked, when there were no other aliases. However,
it depends on whether other x-pack features had time to create their
infrastructure. For example, security creates an alias. When other
aliases had time to be created, the cat-aliases request did not
fail and the test failed.
This commit simply changes the verification that the read/write
aliases were deleted by replacing the cat-aliases request with
two single get-alias requests.
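At the REST level the idea looks roughly like this; the alias names below are hypothetical, the point being that each get-alias call returns 404 on its own, regardless of whatever other aliases exist:
```
# Hypothetical alias names, for illustration only; expect 404 for each
# once the job's read and write aliases have been deleted.
curl -i 'localhost:9200/_alias/my-job-read-alias'
curl -i 'localhost:9200/_alias/my-job-write-alias'
```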
Original commit: elastic/x-pack-elasticsearch@fe2c7b0cb4
The logging shows a wrong HTTP response status code, carried over from a previous
request. In addition, the response body now also gets logged, as debugging
is impossible otherwise.
Original commit: elastic/x-pack-elasticsearch@cc998cd587
This changes the stats collection from gathering every index statistic to only the statistics we actually want. This should help reduce the performance impact of the lookup.
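The same idea at the REST level, as a rough sketch with a hypothetical index name: ask the stats API for only the metrics you need rather than everything.
```
# Illustration only: fetch selected index statistics instead of all of them.
curl 'localhost:9200/my-index/_stats/docs,store,indexing,search'
```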
Original commit: elastic/x-pack-elasticsearch@80ae20f382
Flows time zones through the `QueryInitRequest` and into the
`ExpressionBuilder`, which attaches them to date/time
expressions. Also modifies the code that generates date aggs and
scripts, and that extracts results, to use the time zones.
Original commit: elastic/x-pack-elasticsearch@d6682580d1
This is related to elastic/x-pack-elasticsearch#1217 and elastic/x-pack-elasticsearch#1896. Right now we are checking if an
incoming address is the loopback address or a special local address. It
appears that we also need to check whether that address is bound to a
network interface to be thorough in our localhost check.
This change mimics how we check for localhost in `PatternRule`.
Original commit: elastic/x-pack-elasticsearch@a8947d6174
Depending on the random numbers fed to the analytics,
it is possible that the first planted anomaly ends up
in a different bucket due to the overlapping buckets feature.
That may result in only a single interim bucket being available,
because overlapping buckets block the other interim bucket
from being considered.
I am removing the initial anomaly from the test as it is not useful
and it makes the test unstable.
relates elastic/x-pack-elasticsearch#1897
Original commit: elastic/x-pack-elasticsearch@aca7870708
There are multiple references to this section in different areas of the
documentation. This commit brings back this section to fix the build.
A more extensive PR updating the documentation for the "no default
password" work will follow.
Original commit: elastic/x-pack-elasticsearch@0378e78c8a
This section has been removed from setting-up-authentication. This
commit removes a reference to this section that no longer exists.
Original commit: elastic/x-pack-elasticsearch@43aa0077f9
This commit removes the system key from master and changes watcher to use a secure setting
for the encryption key instead.
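A secure setting lives in the Elasticsearch keystore rather than in elasticsearch.yml; a minimal sketch, assuming the setting is named xpack.watcher.encryption_key and using a hypothetical key file path:
```
# Sketch: store the watcher encryption key as a secure setting.
# Setting name assumed; key file path is hypothetical.
bin/elasticsearch-keystore create
bin/elasticsearch-keystore add-file xpack.watcher.encryption_key /path/to/system_key
```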
Original commit: elastic/x-pack-elasticsearch@5ac95c60ef
This is related to elastic/x-pack-elasticsearch#1217. This PR removes the default password of
"changeme" from the reserved users.
This PR adds special behavior for authenticating the reserved users. No
ReservedRealm user can be authenticated until its password is set. The
one exception to this is the elastic user. The elastic user can be
authenticated with an empty password if the action is a REST request
originating from localhost. When the elastic user is
authenticated this way, it has metadata indicating
that it is in setup mode. An elastic user in setup mode is only
authorized to execute a change password request.
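As a rough sketch of the bootstrap flow, setting the elastic password from localhost over REST might look like this; the new password value is a placeholder:
```
# Illustrative only: elastic, authenticated with an empty password from
# localhost, may only change its own password until one is set.
curl -u elastic: -XPUT 'localhost:9200/_xpack/security/user/elastic/_password' \
  -H 'Content-Type: application/json' \
  -d '{ "password": "new-elastic-password" }'
```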
Original commit: elastic/x-pack-elasticsearch@e1e101a237
Migrates the remaining comparison tests to integration tests
so they run easily in gradle against real Elasticsearch nodes.
This breaks running them in the IDE without running an Elasticsearch
node and setting `-Drest.test.cluster=localhost:9200`. You can run
a compatible node with `cd sql-clients/jdbc && gradle run`.
Original commit: elastic/x-pack-elasticsearch@90a7c2dae7