The build.snapshot flag used by the main build was being propagated down
into the bwc snapshot builds, which is not correct. The bwc subprojects
are always meant to be snapshot builds, or null if they do not
exist. Marking these builds as non-snapshots threw the release off, as it was looking for -SNAPSHOT builds.
Relates #28641
We need to investigate why startup is taking so long in CI for
standalone tests. This commit adds logging for bootstrap and network
code that is executed before the node starts initializing in case this
is the source of the trouble.
Relates #28659
If a tragic event happens while we are refreshing a searcher/reader, the engine can open new files on a store that is already closed
For instance the following CI job failed because a merge was concurrently called on a failing shard:
https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+oracle-java10-periodic/84
This change increments the ref count of the store during a refresh in order to postpone closing the store until the refresh has finished, even after a tragic event.
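The following is a minimal sketch of the idea, not the actual `Store` implementation; the class and method names are illustrative. The point is that the refresh path takes a reference before it may open files and releases it afterwards, so a tragic failure can only release the underlying files once no refresh is in flight.
```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative stand-in for a ref-counted store; its files are only released once the last reference is gone.
class RefCountedStore {
    private final AtomicInteger refCount = new AtomicInteger(1); // the shard's "owner" reference

    boolean tryIncRef() {
        for (;;) {
            int current = refCount.get();
            if (current <= 0) {
                return false; // already fully released, refuse new references
            }
            if (refCount.compareAndSet(current, current + 1)) {
                return true;
            }
        }
    }

    void decRef() {
        if (refCount.decrementAndGet() == 0) {
            System.out.println("store files released"); // placeholder for closing the underlying directory
        }
    }

    // Called when the shard fails after a tragic event; files survive until in-flight refreshes finish.
    void close() {
        decRef();
    }

    // The refresh path: hold a reference so the refresh never opens files on an already-released store.
    void refresh(Runnable openNewFiles) {
        if (tryIncRef() == false) {
            return; // store already closed, skip the refresh instead of touching released files
        }
        try {
            openNewFiles.run();
        } finally {
            decRef();
        }
    }
}
```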
When installing a meta plugin we check the dependencies of each sub-plugin during the installation.
However, if the extended plugin is part of the meta plugin, the installation fails because we only check against plugins that are
already installed. This change is a workaround that extracts all plugins (even those that are not fully installed yet) when the dependency check
is made during the installation. Note that this is how the plugin installation worked before https://github.com/elastic/elasticsearch/pull/28581.
By default we wait for 30 seconds for nodes to start. Looking at CI,
even for passing builds, the startup time is frequently over 20 seconds
(grep for `#wait (Thread[Task worker for ':',5,main]) completed` in any
console log), which is close to the threshold. This commit sets a
default timeout of 120 seconds instead, so that REST tests are less
likely to hit a timeout.
This commit moves the semantic validation (such as which version a plugin
was built for or which Java version it is compatible with) out of the code that reads
a plugin descriptor, leaving the checks on the format of the descriptor
intact.
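As a hedged sketch of the split (names here are hypothetical, not the actual classes): reading the descriptor only enforces the format, and a separate verification step performs the semantic compatibility checks.
```java
import java.util.Properties;

// Hypothetical, simplified descriptor: parsing validates the format only.
final class PluginDescriptorSketch {
    final String name;
    final String builtForVersion;

    private PluginDescriptorSketch(String name, String builtForVersion) {
        this.name = name;
        this.builtForVersion = builtForVersion;
    }

    // Format checks: every required property must be present in the descriptor.
    static PluginDescriptorSketch readFromProperties(Properties props) {
        return new PluginDescriptorSketch(require(props, "name"), require(props, "elasticsearch.version"));
    }

    private static String require(Properties props, String key) {
        String value = props.getProperty(key);
        if (value == null) {
            throw new IllegalArgumentException("property [" + key + "] is missing in the plugin descriptor");
        }
        return value;
    }

    // Semantic checks, applied separately by whatever actually installs or loads the plugin.
    static void verifyCompatibility(PluginDescriptorSketch descriptor, String currentVersion) {
        if (descriptor.builtForVersion.equals(currentVersion) == false) {
            throw new IllegalArgumentException("plugin [" + descriptor.name + "] was built for version ["
                + descriptor.builtForVersion + "] but version [" + currentVersion + "] is running");
        }
        // a Java version compatibility check would live here as well
    }
}
```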
relates #28540
VersionUtils did not previously have logic that removed the last major's
minor snapshot if there was a next bugfix and maintenance bugfix
release. This adds that logic and fixes some broken assumptions in tests
as well.
relates #28505
* NXYSignificanceHeuristic.java: the equals implementation would have
thrown a ClassCastException when comparing against an instance of another type.
Replaced with the Eclipse-generated form.
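For reference, the type-safe shape of such a method, shown on a generic example rather than the actual heuristic class: the class check happens before the cast, so comparing against another type returns false instead of throwing.
```java
// Generic example of an equals/hashCode pair in the Eclipse-generated style.
final class HeuristicParamsExample {
    private final boolean includeNegatives;
    private final boolean backgroundIsSuperset;

    HeuristicParamsExample(boolean includeNegatives, boolean backgroundIsSuperset) {
        this.includeNegatives = includeNegatives;
        this.backgroundIsSuperset = backgroundIsSuperset;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;
        }
        if (obj == null || getClass() != obj.getClass()) {
            return false; // a blind cast here is what would have thrown ClassCastException
        }
        HeuristicParamsExample other = (HeuristicParamsExample) obj;
        return includeNegatives == other.includeNegatives && backgroundIsSuperset == other.backgroundIsSuperset;
    }

    @Override
    public int hashCode() {
        int result = 31 + (includeNegatives ? 1231 : 1237);
        result = 31 * result + (backgroundIsSuperset ? 1231 : 1237);
        return result;
    }
}
```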
This commit removes the extra layer of all plugin files existing under
"elasticsearch" within plugin zips. This simplifies building plugin zips
and removes the need for special logic of modules vs plugins.
The build.snapshot flag was mistakenly passed to every snapshot version,
so when release tests were run, these versions were mistaken for released
artifacts and could not be found in Maven, because they do not
exist. This fix removes that bug in the logic and always makes them proper
snapshots. It also has the benefit of cleaning up VersionUtilsTests,
because they no longer rely on different sets of versions to check
against, which was also a bug.
Today we use the persisted global checkpoint to calculate the starting
seqno in peer-recovery. However we do not check whether the translog
actually belongs to the existing Lucene index when reading the global
checkpoint. In some rare cases, if the translog does not match the Lucene
index, the recovering replica won't be able to complete its recovery.
This can happen as follows.
1. Replica executes a file-based recovery
2. Index files are copied to the replica, but the replica crashes before the recovery finishes
3. Replica starts recovery again, this time sequence-number-based, as the copied commit is safe
4. Replica fails to open the engine because the translog and the Lucene index do not match
5. Replica won't be able to recover from primary
This commit enforces the translogUUID requirement when reading the
global checkpoint directly from the checkpoint file.
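A minimal sketch of the check, using simplified stand-ins for the real checkpoint and commit metadata: the persisted global checkpoint is only trusted if the translog's UUID matches the UUID recorded in the Lucene commit.
```java
// Simplified stand-in for the data read from the translog checkpoint file.
final class CheckpointSketch {
    final String translogUUID;
    final long globalCheckpoint;

    CheckpointSketch(String translogUUID, long globalCheckpoint) {
        this.translogUUID = translogUUID;
        this.globalCheckpoint = globalCheckpoint;
    }

    /**
     * Returns the persisted global checkpoint only if the translog belongs to the given Lucene index;
     * the expected UUID is the translog UUID stored in the Lucene commit's user data.
     */
    static long readGlobalCheckpoint(CheckpointSketch checkpoint, String expectedTranslogUUID) {
        if (checkpoint.translogUUID.equals(expectedTranslogUUID) == false) {
            // The translog does not match this Lucene index, so its global checkpoint must not be used
            // as the starting seqno; the recovery should fall back to copying files instead.
            throw new IllegalStateException("translog UUID [" + checkpoint.translogUUID
                + "] does not match expected UUID [" + expectedTranslogUUID + "]");
        }
        return checkpoint.globalCheckpoint;
    }
}
```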
Relates #28435
We suspect a build failure might be due to a startup timeout, but there is
insufficient information in the logs. This change enhances the information
available so we will be better-informed next time.
This commit changes the state format that was previously passed in to
`MetaDataStateFormat` to always use Smile. This doesn't actually change the
format, since we have used Smile for writing the format since at least 5.0. This
removes the automatic detection of the state format when reading state, since
any state that could be processed in 6.x and 7.x would already have been written
in Smile format.
This is work towards removing the deprecated methods in the XContent code where
we do automatic content-type detection.
Relates to #28504
Currently, if a yaml test has a teardown and a test fails, then
a stash dump of a request in the teardown is logged instead of
a stash dump of a request in the test itself.
By handling the logging of stash dumps separately for the setup, test and
teardown yaml sections, we shouldn't miss the stash dump of the request/response
that is actually causing the yaml test to fail.
This commit forces the depth_first mode for `terms` aggregations that contain a sub-aggregation that needs to access the score of the document
in a nested context (i.e. when the `terms` aggregation is a child of a `nested` aggregation). The score of children documents is not accessible in
breadth_first mode because the `terms` aggregation cannot access the nested context.
Closes #28394
* Search option terminate_after does not handle post_filters and aggregations correctly
This change fixes the handling of the `terminate_after` option when post_filters (or min_score) are used.
`post_filter` should be applied before `terminate_after` in order to terminate the query when enough documents are accepted
by the post_filters.
This commit also changes the type of exception thrown by `terminate_after` in order to ensure that multi collectors (aggregations)
do not try to continue the collection when enough documents have been collected.
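A rough, standalone sketch of the collection logic (plain Java, not the actual Elasticsearch or Lucene collector classes): only documents accepted by the post_filter count toward the limit, and termination is signalled with a dedicated exception type that a multi-collector can recognize and stop on.
```java
import java.util.function.IntPredicate;

// A dedicated exception type lets callers tell "enough documents collected" apart from real failures,
// so a multi-collector (e.g. one driving aggregations) stops instead of continuing the collection.
class EarlyTerminationSketchException extends RuntimeException {
    EarlyTerminationSketchException(String message) {
        super(message);
    }
}

// Minimal stand-in for the collector chain described above.
class TerminateAfterSketch {
    private final int terminateAfter;
    private int accepted;

    TerminateAfterSketch(int terminateAfter) {
        this.terminateAfter = terminateAfter;
    }

    void collect(int docId, IntPredicate postFilter) {
        if (postFilter.test(docId) == false) {
            return; // rejected by the post_filter, so it does not count toward terminate_after
        }
        accepted++;
        if (accepted >= terminateAfter) {
            throw new EarlyTerminationSketchException("terminated after " + accepted + " accepted documents");
        }
    }
}
```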
Closes #28411
This is a follow-up to #28567, changing the method used to capture stack traces, as requested
during the review. Instead of creating a throwable, we explicitly capture the stack trace of the
current thread. This should Make Jason Happy Again ™️ .
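In plain Java terms the difference is roughly the following (illustrative only):
```java
public class StackCaptureExample {
    public static void main(String[] args) {
        // Before: capture the stack indirectly by allocating a throwable at the call site.
        StackTraceElement[] viaThrowable = new Throwable().getStackTrace();

        // After: explicitly capture the stack trace of the current thread, which states the intent directly.
        // Note that this variant includes the getStackTrace() frame itself at the top of the array.
        StackTraceElement[] viaThread = Thread.currentThread().getStackTrace();

        System.out.println("throwable frames: " + viaThrowable.length + ", thread frames: " + viaThread.length);
    }
}
```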
When Elasticsearch is run as a service we should not use the console
logger otherwise we end up duplicating logging (to the Elasticsearch
logs and wherever standard output is captured). Previously we disabled
the console logger when started as a service using systemd (otherwise
the console logs are duplicated to the journal). This commit does the
same for the Windows service, starting Elasticsearch with the --quiet
flag to avoid standard output being written to the service stdout logs.
Relates #28618
Generalizing BWC building so that there is less code to modify for a release. This ensures we do not
need to think about what major or minor version is in the gradle code. It follows the general rules of the
elastic release structure. For more information on the rules, see the VersionCollection's javadoc.
This also removes the additional bwc snapshots that will never be released, such as 6.0.2, which were
being built and tested against every time we ran bwc tests.
Additionally, it creates 4 new projects that correspond to the different types of snapshots that may exist
for a given version. It's now possible to run those individual tasks to work out bwc logic, whereas
previously the entire suite of bwc tests had to be run to work out any logic
changes in the build tools' bwc project. Please note that if a project does not make sense for the
current version, an error will be thrown from that individual project if an attempt is made to
run it.
This should allow for automating the version bumps as well, since it removes all the hardcoded version
logic from the configs.
When Elasticsearch was originally moved to Gradle, the equivalent of Maven's "provided" scope had to be implemented through a plugin. Since then, Gradle has added the "compileOnly" configuration. This commit removes the provided plugin and replaces all uses with compileOnly.
Tests use the (internal) Gradle progress logger. Since around Gradle
4.0, loggers can have children loggers which each get their own line of
updating output. This commit improves the test status output to give
each jvm its own status line, as well as lines for suite and test
counts. It also fixes an issue where the progress logger was not marked
as completed, which caused its last output to never be cleared.
The bwc tests can be disabled in order to facilitate commits necessary
to older branches to maintain backcompat. A check already exists to
ensure this flag is not left disabled too long (once a day by the
branchConsistency check). However, it can be surprising if you try
running bwc tests explicitly and it looks like nothing is happening.
This commit adds a warning during configuration to ensure it is clear
the bwc tests are disabled and enforces a link to a PR which is in the
process of being backported.
This removes all the server references to the deprecated `ParseField.match`
method in favor of the method that passes in the deprecation logger.
Relates to #28504
The bug was caused by the ScriptService having no reference to a ClusterState instance,
because it received the ClusterState after the PipelineStore. This is only the case
after a restart.
A bad side effect is that during a restart, any pipeline to be loaded after a pipeline that uses a stored script
was never loaded, which caused many pipelines to be missing in bulk / index request API calls.
After copying over the Lucene segments during peer recovery, we call cleanupAndVerify which removes all other files in the directory and which then calls getMetadata to check if the resulting files are a proper index. There are two issues with this:
- the directory is not fsynced after the deletions, so that the call to getMetadata, which lists files in the directory, can get a stale view, possibly seeing a deleted corruption marker (which leads to the exception seen in #28435)
- failing to delete a corruption marker should result in a hard failure, as the shard is otherwise unusable.
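A sketch of the first point against Lucene's `Directory` API (error handling and the metadata verification are elided; this is illustrative, not the production code): delete the leftover files, treat a failed deletion as fatal, and sync the directory metadata before listing the directory again.
```java
import java.io.IOException;
import java.util.Set;

import org.apache.lucene.store.Directory;

class CleanupSketch {
    static void deleteLeftoversAndSync(Directory directory, Set<String> filesToKeep) throws IOException {
        for (String file : directory.listAll()) {
            if (filesToKeep.contains(file) == false) {
                // If this throws (e.g. for a corruption marker), the exception must propagate and fail the
                // shard hard, because the shard is otherwise unusable.
                directory.deleteFile(file);
            }
        }
        // fsync the directory so that the following metadata check does not see a stale listing
        // that still contains a file (such as a corruption marker) we just deleted
        directory.syncMetaData();
    }
}
```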
The TikaImpl#parse method comment sounds like this method is only used
in the same package for testing, but AttachmentProcessor uses it outside
of testing, so we should remove this comment.
Tika parsers need accessDeclaredMembers because ZipFile needs
accessDeclaredMembers on JDK 10. This commit guards adding this
permission to parsers so that the permission is only granted on JDK
10. Additionally, we add an assertion that forces us to check if the
permission is still needed in JDK 11.
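A hedged sketch of the guard, with hypothetical helper names and plain JDK APIs rather than the actual plugin security code:
```java
import java.security.Permissions;

class TikaPermissionsSketch {
    // Illustrative only: grant the extra permission solely where it is needed (JDK 10), and fail an
    // assertion on JDK 11 or later so we are forced to re-check whether it is still required.
    static Permissions parserPermissions() {
        Permissions permissions = new Permissions();
        int javaMajor = javaMajorVersion();
        if (javaMajor >= 10) {
            permissions.add(new RuntimePermission("accessDeclaredMembers")); // needed by ZipFile on JDK 10
        }
        assert javaMajor < 11 : "check whether the accessDeclaredMembers permission is still needed on JDK " + javaMajor;
        return permissions;
    }

    private static int javaMajorVersion() {
        // "1.8" on JDK 8, then "9", "10", ... on later releases
        String spec = System.getProperty("java.specification.version");
        return spec.startsWith("1.") ? Integer.parseInt(spec.substring(2)) : Integer.parseInt(spec);
    }
}
```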
Relates #28603
Tests on jdk10 were failing because of a change in its ZipFile implementation
that now needs `accessDeclaredMembers` permissions. This change adds
the missing permission to the plugins security policy and TikaImpl.
Closes #28568
The shard not-available exceptions are currently ignored in replication
as a best effort to avoid failing not-yet-ready shards. However, these
exceptions can also happen on fully active shards, in which case we may
have skipped important failures from replicas.
Since #28049, only fully initialized shards receive write requests.
This restriction allows us to handle all exceptions in replication.
There is a side effect to this change. If a replica retries its peer
recovery a second time after being tracked in the replication group, it
can receive replication requests even though it's not yet ready. That
shard may be failed and allocated to another node even though it has a
good Lucene index on that node.
This PR does not change the way we report replication errors to users,
so the shard not-available exceptions still won't be reported, as before.
Relates #28049
Relates #28534
Today we acquire a permit from the shard to coordinate between indexing operations, recoveries and other state transitions. When we leak a permit it's practically impossible to find who the culprit is. This PR adds stack trace capturing for each permit so we can identify which part of the code is responsible for acquiring the unreleased permit. This code is only active when assertions are active.
The output is something like:
```
java.lang.AssertionError: shard [test][1] on node [node_s0] has pending operations:
--> java.lang.RuntimeException: something helpful 2
at org.elasticsearch.index.shard.IndexShardOperationPermits.acquire(IndexShardOperationPermits.java:223)
at org.elasticsearch.index.shard.IndexShard.<init>(IndexShard.java:322)
at org.elasticsearch.index.IndexService.createShard(IndexService.java:382)
at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:514)
at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:143)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.createShard(IndicesClusterStateService.java:552)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.createOrUpdateShards(IndicesClusterStateService.java:529)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:231)
at org.elasticsearch.cluster.service.ClusterApplierService.lambda$callClusterStateAppliers$6(ClusterApplierService.java:498)
at java.base/java.lang.Iterable.forEach(Iterable.java:75)
at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:495)
at org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:482)
at org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:432)
at org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:161)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:566)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:244)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:207)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.base/java.lang.Thread.run(Thread.java:844)
--> java.lang.RuntimeException: something helpful
at org.elasticsearch.index.shard.IndexShardOperationPermits.acquire(IndexShardOperationPermits.java:223)
at org.elasticsearch.index.shard.IndexShard.<init>(IndexShard.java:311)
at org.elasticsearch.index.IndexService.createShard(IndexService.java:382)
at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:514)
at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:143)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.createShard(IndicesClusterStateService.java:552)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.createOrUpdateShards(IndicesClusterStateService.java:529)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:231)
at org.elasticsearch.cluster.service.ClusterApplierService.lambda$callClusterStateAppliers$6(ClusterApplierService.java:498)
at java.base/java.lang.Iterable.forEach(Iterable.java:75)
at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:495)
at org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:482)
at org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:432)
at org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:161)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:566)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:244)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:207)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.base/java.lang.Thread.run(Thread.java:844)
```
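A stripped-down sketch of the bookkeeping behind that output (the real IndexShardOperationPermits is more involved; names here are illustrative): each acquisition records the current thread's stack trace in a map, but only when assertions are enabled, and that map is what produces the listing above.
```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative bookkeeping only: remember the acquiring stack trace of every outstanding permit,
// and only do so when assertions are enabled, so production runs pay no cost.
class PermitTraceSketch {
    private final AtomicLong permitIds = new AtomicLong();
    private final Map<Long, StackTraceElement[]> issuedPermits = new ConcurrentHashMap<>();

    long acquire() {
        long permitId = permitIds.incrementAndGet();
        assert recordAcquisition(permitId); // only evaluated (and therefore only recorded) with -ea
        return permitId;
    }

    void release(long permitId) {
        issuedPermits.remove(permitId);
    }

    private boolean recordAcquisition(long permitId) {
        issuedPermits.put(permitId, Thread.currentThread().getStackTrace());
        return true; // always true: the assert exists only for its side effect
    }

    // Used by a "has pending operations" assertion to print the acquisition site of each leaked permit.
    String describeOutstandingPermits() {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<Long, StackTraceElement[]> entry : issuedPermits.entrySet()) {
            sb.append("--> permit ").append(entry.getKey()).append(" acquired at:\n");
            for (StackTraceElement frame : entry.getValue()) {
                sb.append("\tat ").append(frame).append('\n');
            }
        }
        return sb.toString();
    }
}
```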
Currently the master node logs a warning message whenever it receives a
failed shard request. However, this can be noisy because
- Multiple failed shard requests can be issued for a single shard
- Failed shard requests can still be issued for an already failed shard
This commit moves the log-warn to AllocationService in which the failing
shard action actually happens. This is another prerequisite step in
order to not ignore the shard not-available exceptions in the
replication.
Relates #28534
The queue size test has a race condition. Namely, the offering thread can
run so quickly that it completes all of its offering iterations before the
queue size thread ever has a chance to run a single size-poll
iteration. This means that the size will never actually be polled and
the test can spuriously fail. Since this test is checking for a race
condition between polling the size of the queue and offers to the queue,
what we really want to do here is execute each iteration in lockstep,
giving the threads multiple chances for the race between polling the
size and the offers to occur. This commit addresses this by running the
two threads in lockstep for multiple iterations so that they have
multiple chances to race.
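A condensed, self-contained sketch of the lockstep pattern (not the actual test): a barrier makes the offering thread and the size-polling thread meet at the start of every iteration, so the race they are meant to exercise gets many chances to occur.
```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.CyclicBarrier;

public class LockstepRaceExample {
    public static void main(String[] args) throws InterruptedException {
        final int iterations = 1024;
        final ConcurrentLinkedQueue<Integer> queue = new ConcurrentLinkedQueue<>();
        // Both threads block on the barrier each iteration, so neither can finish all of its
        // iterations before the other has even started.
        final CyclicBarrier barrier = new CyclicBarrier(2);

        Thread offerer = new Thread(() -> {
            for (int i = 0; i < iterations; i++) {
                await(barrier);
                queue.offer(i); // races against the concurrent size() call below
            }
        });
        Thread sizePoller = new Thread(() -> {
            for (int i = 0; i < iterations; i++) {
                await(barrier);
                int size = queue.size(); // the size poll whose race with offer() the test exercises
                assert size >= 0;
            }
        });

        offerer.start();
        sizePoller.start();
        offerer.join();
        sizePoller.join();
        System.out.println("final size: " + queue.size());
    }

    private static void await(CyclicBarrier barrier) {
        try {
            barrier.await();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```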
Relates #28584