Today the HTTP exporter settings can be set without the exporter type having been
configured to HTTP. When it is time to initialize the exporter, we can
blow up. Since this initialization happens on the cluster state applier
thread, it is quite problematic that we do not reject settings updates
where the type is not configured to HTTP, but there are HTTP exporter
settings configured. This commit addresses this by validating that the
exporter type is not only set, but is set to HTTP.
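A minimal sketch of the shape of that validation, with illustrative setting keys and check logic (not the actual monitoring exporter code):

```java
import java.util.Map;

// Hedged sketch of the validation described above; keys and shape are illustrative.
final class ExporterSettingsValidatorSketch {

    static void validate(String name, Map<String, String> settings) {
        String type = settings.get("xpack.monitoring.exporters." + name + ".type");
        boolean hasHttpSettings = settings.keySet().stream()
            .anyMatch(key -> key.equals("xpack.monitoring.exporters." + name + ".host"));
        if (hasHttpSettings && "http".equals(type) == false) {
            // Reject at settings-update time rather than blowing up later on
            // the cluster state applier thread during exporter initialization.
            throw new IllegalArgumentException(
                "HTTP exporter settings for [" + name + "] require type [http] but found [" + type + "]");
        }
    }
}
```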
The "code_user" and "code_admin" reserved roles existed to support
code search which is no longer included in Kibana.
The "kibana_system" role included privileges to read/write from the
code search indices, but no longer needs that access.
Backport of: #50068
This commit tweaks the workaround introduced in #49211 to support
Gradle 6.0. In the workaround, we specifically override the address
the Gradle daemon binds to by passing the desired address via the
OPENSHIFT_IP environment variable. This works fine for builds using
Gradle 6.0, but for older Gradle versions this causes issues with
inter-daemon communication, specifically when we build BWC branches
not on Gradle 6.0. The fix here is to strip that environment variable
out when building the target BWC branch if that branch is on an
older Gradle version.
This is all temporary and will be removed when this bug fix is released
in Gradle 6.1.
Closes #50025
* [DOCS] Document JVM node stats
Documents the `jvm` parameters returned by the `_nodes/stats` API.
Co-Authored-By: James Baiera <james.baiera@gmail.com>
* [ML] Add graceful retry for anomaly detector result indexing failures (#49508)
All result indexing now retries up to the number of times configured in `xpack.ml.persist_results_max_retries`. The retries use a semi-random, exponential backoff, as sketched below.
* fixing test
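A minimal sketch of what semi-random (jittered) exponential backoff looks like; names and delay values here are illustrative, not the actual ML persistence code:

```java
import java.util.Random;

// Hedged sketch of retrying an indexing call with jittered exponential backoff.
final class ResultsRetrySketch {

    static void indexWithRetry(Runnable indexRequest, int maxRetries) throws InterruptedException {
        Random random = new Random();
        long windowMs = 50; // hypothetical initial backoff window
        for (int attempt = 0; ; attempt++) {
            try {
                indexRequest.run();
                return;
            } catch (RuntimeException e) {
                if (attempt >= maxRetries) {
                    throw e; // exhausted the configured number of retries
                }
                // Sleep a random slice of the current window, then double the window.
                Thread.sleep(1 + random.nextInt((int) windowMs));
                windowMs = Math.min(windowMs * 2, 60_000);
            }
        }
    }
}
```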
When decoupling the pipeline reduction from regular agg reduction,
MultiBucket aggs were modified to reduce their bucket's pipeline
aggs first before reducing the sibling aggs. This modification
was missed on SingleBucket aggs, meaning any SingleBucket agg would
fail to reduce its pipeline sub-aggs.
The example snippets in the percentile rank agg docs use a test dataset
named `latency`, which is generated from docs/build.gradle.
At some point the dataset and example snippets were updated, but the
text surrounding the snippets was not. This means the text and the
example snippets shown no longer match up.
This corrects that by updating the snippets using `// TESTRESPONSE` magic comments.
See discussion in #50047 (comment).
There are reproducible issues with Thread#suspend on JDK 11 and JDK 12 for me locally,
and we have one failure for each on CI.
JDK 8 and JDK 13 are stable on CI and in my testing, so I'd selectively disable
this test on the affected JDKs (sketched below) to keep the coverage. We aren't using
suspend in production code, so the JDK bug behind this does not affect us.
Closes #50047
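One hedged way such a selective disable could look in a JUnit test; the mechanism in the real test may differ:

```java
import org.junit.Assume;
import org.junit.Before;

// Hedged sketch (not the actual test change): skip on JDK 11/12 where
// Thread#suspend misbehaves, while keeping coverage on JDK 8 and 13.
public class SuspendCoverageSketch {

    @Before
    public void skipOnAffectedJdks() {
        String version = System.getProperty("java.specification.version");
        Assume.assumeFalse("Thread#suspend is unreliable on JDK 11/12, see #50047",
            "11".equals(version) || "12".equals(version));
    }
}
```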
* Remove Unused Single Delete in BlobStoreRepository
There are no more production uses of the non-bulk delete or the delete that throws
on missing so this commit removes both these methods.
Only the bulk delete logic remains. Where the bulk delete was derived from single deletes,
the single delete code was inlined into the bulk delete method.
Where single delete was used in tests it was replaced by bulk deleting.
* Better Logging GCS Blobstore Mock
Two things:
1. Since we can't reproduce these 400s locally, we should throw a descriptive assertion error and figure out why we're not reading a multi-part request, instead of returning a `400` and failing the tests that way.
2. We were not logging the exception on a cleanup delete failure that coincides with the `400` issue in tests.
Relates #49429
This commit ensures that deserializing a GetResult from StreamInput does not
leave metaFields and documentFields null. This could cause an NPE in
situations where the upsert response for a document that did not exist is
passed back to a node that forwarded the upsert request.
Closes #48215
This adds "enterprise" as an acceptable type for a license loaded
through the PUT _license API.
Internally an enterprise license is treated as having a "platinum"
operating mode.
The handling of License types was refactored to have a new explicit
"LicenseType" enum in addition to the existing "OperatingMode" enum.
By default (in 7.x) the GET license API will return "platinum" when an
enterprise license is active in order to be compatible with existing
consumers of that API.
A new "accept_enterprise" flag has been introduced to allow clients to
opt-in to receive the correct "enterprise" type.
Backport of: #49223
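A hedged sketch of the type-to-mode mapping and the `accept_enterprise` behavior described above; names are illustrative, not the actual License implementation:

```java
import java.util.Locale;

// Hedged sketch of treating an enterprise license as platinum internally.
enum LicenseTypeSketch {
    BASIC, STANDARD, GOLD, PLATINUM, ENTERPRISE;

    // Internally an enterprise license operates as platinum.
    String operatingMode() {
        return this == ENTERPRISE ? "platinum" : name().toLowerCase(Locale.ROOT);
    }

    // What GET _license reports in 7.x, depending on the accept_enterprise flag.
    String reportedType(boolean acceptEnterprise) {
        if (this == ENTERPRISE && acceptEnterprise == false) {
            return "platinum"; // keep existing consumers of the API working
        }
        return name().toLowerCase(Locale.ROOT);
    }
}
```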
Running tools requires a shell. This should be the shell setup by the
base packaging tests, but currently tests must pass in their own shell.
This commit begins to make running tools easier by eliminating the shell
argument, instead keeping the shell as part of the Installation (which
can eventually be passed through from the test itself on installation).
The variable names for each tool are also simplified.
The testclusters registry is a singleton extension element added to the
root project which tracks which test clusters are used throughout the
multi-project build. But having the same name as the extension used to
configure test clusters within each subproject breaks using a single
project for an external plugin. This commit renames the registry
extension to make it unique.
Closes #49787
This commit goes from using a JvmArgumentProvider to using the normal
Test task APIs for passing the `HeapDumpOnOutOfMemoryError` JVM
argument. This makes it simpler for subprojects, such as lang-painless
to override this setting if necessary.
Closes#49117
(cherry picked from commit e97c38ff8e862abdc1d7816c66f9869ed216031f)
Currently we do not know the size of the transport header (map of
request and response headers, features array, and action name). This means
that we must read the entire transport message to dependably act on the
headers. This commit adds an int indicating the size of the transport
headers. With this addition we can act upon the headers prior to reading
the entire message.
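A minimal sketch of the framing idea, with an illustrative layout: the int prefix lets the receiver decode and act on the headers before the rest of the message has been read.

```java
import java.nio.ByteBuffer;

// Hedged sketch of length-prefixed headers in a message frame.
final class HeaderFramingSketch {

    static ByteBuffer encode(byte[] headers, byte[] body) {
        ByteBuffer buf = ByteBuffer.allocate(Integer.BYTES + headers.length + body.length);
        buf.putInt(headers.length); // new: size of the transport headers
        buf.put(headers);           // request/response headers, features array, action name
        buf.put(body);
        buf.flip();
        return buf;
    }

    // The receiver can now slice off the headers without the full message.
    static byte[] readHeaders(ByteBuffer received) {
        int headerSize = received.getInt();
        byte[] headers = new byte[headerSize];
        received.get(headers);
        return headers;
    }
}
```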
* CSV ingest processor (#49509)
This change adds a new ingest processor that breaks a line from a CSV file into separate fields.
By default it conforms to RFC 4180 but can be tweaked.
Closes #49113
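A rough sketch of RFC 4180-style field splitting (quoted fields, doubled quotes as escapes); the actual processor supports more configuration than this:

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch of splitting one CSV line per RFC 4180.
final class CsvLineSketch {

    static List<String> split(String line) {
        List<String> fields = new ArrayList<>();
        StringBuilder field = new StringBuilder();
        boolean inQuotes = false;
        for (int i = 0; i < line.length(); i++) {
            char c = line.charAt(i);
            if (inQuotes) {
                if (c == '"') {
                    if (i + 1 < line.length() && line.charAt(i + 1) == '"') {
                        field.append('"'); // doubled quote is an escaped quote
                        i++;
                    } else {
                        inQuotes = false; // closing quote
                    }
                } else {
                    field.append(c);
                }
            } else if (c == '"') {
                inQuotes = true;
            } else if (c == ',') {
                fields.add(field.toString());
                field.setLength(0);
            } else {
                field.append(c);
            }
        }
        fields.add(field.toString());
        return fields;
    }
}
```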
We have a long history of advancing the required compiler to the newest
JDK. JDK 13 has been with us for a while, but we were blocked from
upgrading since Gradle was not compatible with JDK 13. With the
advancement in our project to Gradle 6 which supports JDK 13, we can now
advance our minimum compiler version. This commit updates the minimum
compiler version to JDK 13.
The `sparse_vector` REST tests occasionally fail on 7.x because we don't receive the expected response headers with deprecation warnings.
One theory as to what is happening is that there is an extra empty index present in addition to the test index. Since the search doesn't specify an index name, it hits both the test index and this extra empty index and shard responses from the extra index don't produce deprecation warnings. If not all shard responses contain the warning headers, then certain deprecation warnings can be lost (due to the bug described in #33936).
This PR tries to harden the `sparse_vector` tests by always specifying the index name during a search. This doesn't fix the root causes of the issue, but is good practice and can help avoid intermittent failures.
Addresses #49383.
The `ClassificationIT.testTwoJobsWithSameRandomizeSeedUseSameTrainingSet`
test was previously set up to just have 10 rows. With `training_percent`
of 50%, only 5 rows will be used for training. There is a good chance that
all 5 rows will be of one class, which results in failure.
This commit increases the rows to 100. Now 50 rows should be used for training
and the chance of failure should be very small.
Backport of #50072
Batch deletes get a response for every delete request, not just those that actually hit an existing blob.
The fact that we only responded for existing blobs leads to a degenerate response that throws a parse exception if a batch delete only contains non-existent blobs.
Watcher logs when actions fail in ActionWrapper, but failures to
generate an email attachment are not logged and we thus only know the
type of the exception and not where/how it occurred.
Adjusts the subclasses of `TransportMasterNodeAction` to use their own loggers
instead of the one for the base class.
Relates #50056.
Partial backport of #46431 to 7.x.
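The pattern is simply to declare the logger on the concrete class rather than inheriting the base class logger, e.g. (the class name here is illustrative):

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Before, log lines were attributed to the shared base class logger. Each
// subclass now declares its own logger, so output names the concrete action.
class TransportExampleAction {
    private static final Logger logger = LogManager.getLogger(TransportExampleAction.class);
}
```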
As we discussed in #36371, interval notation is confusing to some users. This makes the intention clearer by just explaining inclusivity and exclusivity in the docs.
Return a 401 in all cases when a request is submitted with an
access token that we can't consume. Before this change, we would
throw a 500 when a request came in with an access token that we
had generated but was then invalidated/expired and deleted from
the tokens index.
Resolves: #38866
Backport of #49736
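Conceptually, the change treats a generated-but-deleted token the same as any other unusable token, as in this hedged sketch; the exception and store types are illustrative stand-ins for the real security classes:

```java
// Hedged sketch: any access token we cannot consume yields a 401, never a 500.
final class TokenAuthSketch {

    interface TokenStore { TokenInfo lookup(String accessToken); }
    interface TokenInfo { boolean invalidated(); boolean expired(); }

    static final class UnauthorizedException extends RuntimeException { // surfaces as 401
        UnauthorizedException(String message) { super(message); }
    }

    static TokenInfo authenticate(String accessToken, TokenStore store) {
        TokenInfo token = store.lookup(accessToken); // null if deleted from the tokens index
        if (token == null || token.invalidated() || token.expired()) {
            // Previously a token we had generated but then invalidated and
            // deleted could escape as a 500.
            throw new UnauthorizedException("access token is invalid or expired");
        }
        return token;
    }
}
```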
This makes two changes to the catch node:
1. Use SDeclaration to replace independent variable usage.
2. Use a DType to set a "minimum" exception type. This allows us to require
users to continue using Exception as the "minimum" type for catch blocks, while
internally catching Error/Throwable. This is a required step toward
removing custom try/catch blocks from SClass.
This refactor bridges some gaps between a long-running feature branch (#49268) and the master branch.
First of all, this PR gives our PackagingTestCase class some methods to start and stop Elasticsearch that will switch on packaging type and delegate to the appropriate utility class for deb/RPM packages, archive installations, and Docker. These methods should be very useful as we continue to group tests by function rather than by package or platform type.
Second, the password-protected keystore tests have a particular need to read the output of Elasticsearch startup commands. In order to make this easier to do, some commands now return Shell.Result objects so that tests can inspect the shell output. To that end, there's also an assertElasticsearchFailure method that will handle checking for startup failures for the various distribution types.
There is an update to the PowerShell startup script for archives that asynchronously redirects the output of the PowerShell process to files that we can read for errors.
Finally, we use the ES_STARTUP_SLEEP_TIME environment variable to make sure that our startup commands wait long enough before exiting for errors to make it to the standard output and error streams.
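A hedged sketch of the Shell.Result and assertElasticsearchFailure shapes described above; the real APIs may differ in detail:

```java
// Hedged sketch of the packaging test helpers; names follow the PR, details are illustrative.
class PackagingHelpersSketch {

    // Minimal stand-in for Shell.Result: captured output plus exit code.
    static final class Result {
        final int exitCode;
        final String stdout;
        final String stderr;

        Result(int exitCode, String stdout, String stderr) {
            this.exitCode = exitCode;
            this.stdout = stdout;
            this.stderr = stderr;
        }
    }

    // Fail the test with the captured output if startup did not fail as expected.
    static void assertElasticsearchFailure(Result result, String expectedMessage) {
        if (result.exitCode == 0) {
            throw new AssertionError("expected Elasticsearch startup to fail, but it succeeded");
        }
        if (result.stdout.contains(expectedMessage) == false
            && result.stderr.contains(expectedMessage) == false) {
            throw new AssertionError(
                "startup failed without expected message [" + expectedMessage + "]: " + result.stderr);
        }
    }
}
```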
Multiple version ranges can now be used in the skip section of YAML
tests. This is useful when a bugfix was backported to the latest versions
and all previous releases contain a wire-breaking bug.
Examples:
`6.1.0 - 6.3.0, 6.6.0 - 6.7.9, 7.0 -`
`- 7.2, 8.0.0 -`
Backport of #50014
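A rough sketch of how such comma-separated, possibly open-ended ranges might be matched; real code would compare parsed Version objects, plain string comparison here only keeps the sketch short:

```java
// Hedged sketch of matching a version against "6.1.0 - 6.3.0, 7.0 -" style ranges.
final class VersionRangesSketch {

    static boolean matches(String version, String ranges) {
        for (String range : ranges.split(",")) {
            String[] bounds = range.split("-", -1);
            if (bounds.length != 2) {
                continue; // malformed range; real code would reject it
            }
            String lower = bounds[0].trim(); // empty means open-ended below
            String upper = bounds[1].trim(); // empty means open-ended above
            boolean aboveLower = lower.isEmpty() || version.compareTo(lower) >= 0;
            boolean belowUpper = upper.isEmpty() || version.compareTo(upper) <= 0;
            if (aboveLower && belowUpper) {
                return true; // the version falls inside one of the ranges
            }
        }
        return false;
    }
}
```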
Removes a reference to shadow replicas from the cat shards API docs
and a comment in cluster/routing/UnassignedInfo.java.
Shadow replicas were removed with #23906.