Today a common reason for a `ShardLockObtainFailedException` is when a
shard is removed from a node and then assigned straight back to it again
before the node has had a chance to shut the previous shard instance
down. For instance, this can happen if a node briefly leaves the cluster
holding a primary with no in-sync replicas.
The message in this case is typically as follows:
obtaining shard lock timed out after 5000ms, previous lock details: [shard creation] trying to lock for [shard creation]
This is pretty hard to interpret, and doesn't raise the important
question: "why didn't the shard shut down sooner?"
With this change we reword the message a bit, report the age of the
shard lock, and adjust the details to report that the lock is held by a
closing shard:
obtaining shard lock for [starting shard] timed out after [5000ms], lock already held for [closing shard] with age [12345ms]
Relates #38807
`ESTestCase#testRandomDateFormatterPattern` previously asserted that
round tripping `millis -> text -> millis` wouldn't lose any precision.
But some date formats don't include the time of day so, of course, this
could lose precision. This replaces that with an assertion that
`text -> millis -> text` doesn't lose precision, which should be true
for any sane date format. Really, we're just trying to make sure that
the random date formats that we return are *fairly* sane.
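A minimal `java.time` illustration of the distinction (this is not the actual `ESTestCase` code; the pattern and values are arbitrary):

```java
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class DateRoundTripExample {
    public static void main(String[] args) {
        // A date-only pattern drops the time of day, so millis -> text -> millis loses precision.
        DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd").withZone(ZoneOffset.UTC);
        long millis = 1_600_000_000_123L; // an instant that includes a time-of-day component
        String text = formatter.format(Instant.ofEpochMilli(millis));
        long parsedMillis = LocalDate.parse(text, formatter)
            .atStartOfDay(ZoneOffset.UTC).toInstant().toEpochMilli();
        assert parsedMillis != millis; // the time of day is gone

        // text -> millis -> text is stable: re-formatting the parsed value reproduces the text.
        String reformatted = formatter.format(Instant.ofEpochMilli(parsedMillis));
        assert text.equals(reformatted);
    }
}
```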
We have seen a situation where the total number of search operations was higher
than expected. Unfortunately, we did not have enough information to figure it
out. This commit adds the failures to the error to provide more context
and adjusts the log level to debug in case of failure.
Per #35284, it looks like we changed this from a max field expansions limit to a soft limit using the `indices.query.bool.max_clause_count` dynamic cluster setting.
We only work with heap byte buffers at this point, so we can (and do) unwrap the
`byte[]` ourselves and use `BytesArray` instead of a needless level of indirection via `ByteBuffer`.
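For illustration, a rough sketch of the unwrapping; the helper class and method names here are hypothetical:

```java
import java.nio.ByteBuffer;

import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.bytes.BytesReference;

final class HeapBufferUnwrap {
    // Wrap the backing byte[] of a heap buffer directly instead of adapting the ByteBuffer.
    static BytesReference unwrap(ByteBuffer buffer) {
        assert buffer.hasArray() : "only heap buffers are expected here";
        return new BytesArray(buffer.array(), buffer.arrayOffset() + buffer.position(), buffer.remaining());
    }
}
```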
Backport of #60742.
This PR resurrects support for building Docker images based on one of
Red Hat's UBI images. It also adds support for running the existing
Docker tests against the image. The image is named
`elasticsearch-ubi8:<version>`.
I also changed the Docker build file to use enums instead of strings in a lot
of places, for added rigour.
There is a corner case here in which, during a partial snapshot, the index is
deleted right between starting the snapshot in the CS and the data node getting to work
on it, causing the data node to fail that shard snapshot and making the snapshot `PARTIAL`.
Closes #61208
* First crack at rewriting the CCR introduction.
* Emphasizing Kibana in configuring CCR (part one).
* Many more edits, plus new files.
* Fixing test case.
* Removing overview page and consolidating that information in the main page.
* Adding redirects for moved and deleted pages.
* Removing, consolidating, and adding redirects.
* Fixing duplicate ID in redirects and removing outdated reference.
* Adding test case and steps for recreating a follower index.
* Adding steps for managing CCR tasks in Kibana.
* Adding tasks for managing auto-follow patterns.
* Fixing glossary link.
* Fixing glossary link, again.
* Updating the upgrade information and other stuff.
* Apply suggestions from code review
* Incorporating review feedback.
* Adding more edits.
* Fixing link reference.
* Adding use cases for #59812.
* Incorporating feedback from reviewers.
* Apply suggestions from code review
* Incorporating more review comments.
* Condensing some of the steps for accessing Kibana.
* Incorporating small changes from reviewers.
Adds an important admonition for the built-in `metrics-*-*` and `logs-*-*` index
templates.
Updates several put index template snippets to include a priority.
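A hedged sketch of such a snippet, driven here through the low-level Java REST client; the template name, index pattern, and priority value are illustrative only:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class PutTemplateWithPriority {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request request = new Request("PUT", "/_index_template/my-logs-template");
            // Setting an explicit priority makes precedence against the built-in
            // logs-*-* / metrics-*-* templates unambiguous.
            request.setJsonEntity("{\n"
                + "  \"index_patterns\": [\"logs-*-*\"],\n"
                + "  \"priority\": 200,\n"
                + "  \"template\": { \"settings\": { \"number_of_shards\": 1 } }\n"
                + "}");
            Response response = client.performRequest(request);
            System.out.println(response.getStatusLine());
        }
    }
}
```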
Adds a method to make a random date `DateFormatter` pattern. We expect
this'll be useful for runtime fields to compare their formatting with
the standard date field.
Currently we can occasionally get an ArithmeticException when parsing bad input
values on 'date' fields, and it is passed on even if 'ignore_malformed' is set.
This change adds this exception to the ones we already catch for malformed
values.
Closes #52634
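A hypothetical sketch of the widened catch, simplified with `java.time` rather than the actual mapper code:

```java
import java.time.DateTimeException;
import java.time.format.DateTimeFormatter;
import java.time.temporal.TemporalAccessor;

final class LenientDateParsing {
    /**
     * Returns null for a malformed value instead of failing the document,
     * now also swallowing ArithmeticException (e.g. an overflow while converting to millis).
     */
    static TemporalAccessor parseLeniently(DateTimeFormatter formatter, String value, boolean ignoreMalformed) {
        try {
            return formatter.parse(value);
        } catch (IllegalArgumentException | DateTimeException | ArithmeticException e) {
            if (ignoreMalformed) {
                return null; // the value is skipped rather than rejecting the whole document
            }
            throw e;
        }
    }
}
```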
* Some progress on failing runtime fields tests (bring #61098 to 7.x)
This breaks apart a test for the `terms` aggregation into one that
works for runtime fields and one that doesn't.
Breaks up an integration test into one that runtime fields can run and
one that runtime fields have to skip. This is because runtime fields
don't have global ords and we assert things *about* global ords in the
test we have to skip.
Today we allocate a new `byte[]` for each document written to the
cluster state. Some of these documents may be quite large. We need a
buffer that's at least as large as the largest document, but there's no
need to use a fresh buffer for each document.
With this commit we re-use the same `byte[]` much more, only allocating
it afresh if we need a larger one, and using the buffer needed for one
round of persistence as a hint for the size needed for the next one.
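A minimal sketch of the reuse idea, with illustrative names rather than the actual persistence classes:

```java
/** Re-usable byte buffer: grows when needed, and its final size seeds the next round's hint. */
final class ReusableBuffer {
    private byte[] buffer;

    ReusableBuffer(int initialSizeHint) {
        buffer = new byte[initialSizeHint];
    }

    /** Returns a buffer of at least {@code minSize} bytes, allocating only if the current one is too small. */
    byte[] ensureCapacity(int minSize) {
        if (buffer.length < minSize) {
            buffer = new byte[minSize];
        }
        return buffer;
    }

    /** The size reached in this round can be used as the initial hint for the next round of persistence. */
    int sizeHint() {
        return buffer.length;
    }
}
```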
The 7.x branch preserves the legacy discovery mechanism from 6.x purely
for running internal cluster tests; this mechanism is otherwise
completely untested and unsupported. However, it is still technically
possible to use it outside of the test suite if you dig through the
source code to work out what settings need to be set. With this change
we make it impossible to use this mechanism in production.
Closes #61177
We now leverage artifact transforms when downloading and unpacking Elasticsearch distributions.
This has the following benefits:
- handcrafted extract tasks on the root project are no longer required.
- the general tight coupling to the root project has been removed.
- the overall number of configurations required to handle a distribution has been reduced.
- `ElasticsearchDistribution` has been simplified by making `Extracted` an ordinary `Configuration`.
Downloaded and unpacked external distributions are reused in later builds by being cached
in the Gradle user home.
DistributionDownloadPlugin functional tests have been extended and ported
to DistributionDownloadPluginFuncTest.
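A rough sketch of what an unpack artifact transform can look like; the class name is made up and the registration and unpacking details are elided:

```java
import java.io.File;

import org.gradle.api.artifacts.transform.InputArtifact;
import org.gradle.api.artifacts.transform.TransformAction;
import org.gradle.api.artifacts.transform.TransformOutputs;
import org.gradle.api.artifacts.transform.TransformParameters;
import org.gradle.api.file.FileSystemLocation;
import org.gradle.api.provider.Provider;

public abstract class UnpackDistributionTransform implements TransformAction<TransformParameters.None> {

    @InputArtifact
    public abstract Provider<FileSystemLocation> getArchive();

    @Override
    public void transform(TransformOutputs outputs) {
        File archive = getArchive().get().getAsFile();
        // Transform outputs are cached in the Gradle user home, so later builds reuse them.
        File extractedDir = outputs.dir(archive.getName() + "-extracted");
        unpack(archive, extractedDir);
    }

    private static void unpack(File archive, File target) {
        // Placeholder: untar/unzip the distribution archive into the target directory.
    }
}
```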
* Fix java8 compliant Path calculation
With Gradle 6.6 we can now use the native support for the `--release` compile option.
As Gradle by default resolves the compatibility version from the `release` property, we needed to work around this in order to keep our current setup. An issue was raised with Gradle to track this: https://github.com/gradle/gradle/issues/14141
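A hypothetical sketch of using the native support from a convention plugin, setting `options.release` on every `JavaCompile` task:

```java
import org.gradle.api.Plugin;
import org.gradle.api.Project;
import org.gradle.api.tasks.compile.JavaCompile;

/** Convention plugin sketch: prefer --release over source/target compatibility. */
public class ReleaseFlagPlugin implements Plugin<Project> {
    @Override
    public void apply(Project project) {
        project.getTasks().withType(JavaCompile.class).configureEach(task ->
            // --release 8 compiles against the Java 8 API and emits Java 8 bytecode.
            task.getOptions().getRelease().set(8)
        );
    }
}
```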