This commit removes and now forbids all uses of
java.util.concurrent.ThreadLocalRandom across the codebase. The
underlying issue with ThreadLocalRandom is that it cannot be seeded.
This means that if ThreadLocalRandom is used in production code, then
tests that cover any code path containing ThreadLocalRandom cannot be
made reproducible. Instead, using
org.elasticsearch.common.random.Randomness#get gives a reproducible
source of randomness when running under tests and otherwise still
returns an instance of ThreadLocalRandom when running as production
code.
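As a rough illustration of the pattern (a hypothetical sketch, not the actual org.elasticsearch.common.random.Randomness implementation; the "tests.seed" property and hex parsing are assumptions made here for brevity):

```java
import java.util.Random;
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical sketch of the pattern behind Randomness#get: a reproducible,
// seeded Random under tests, ThreadLocalRandom otherwise.
public final class RandomnessSketch {

    // Assumed to be set by the test framework; absent in production.
    private static final String TEST_SEED = System.getProperty("tests.seed");

    private RandomnessSketch() {}

    public static Random get() {
        if (TEST_SEED != null) {
            // Same seed, same sequence: failures become reproducible.
            return new Random(Long.parseLong(TEST_SEED, 16));
        }
        // Production path: still the non-seedable ThreadLocalRandom.
        return ThreadLocalRandom.current();
    }
}
```

Call sites then ask Randomness for their Random instead of reaching for ThreadLocalRandom directly.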
On slow machines, when this test randomly picks a large number of shards, it can occasionally take more than 32.5 seconds to snapshot all shards. That causes the test to miss the second-to-last assert in awaitsBusy at 32.5 seconds and then time out in BlockingClusterStateListener at 60 seconds. Due to the timeout, the pending task queue is cleaned before the last awaitsBusy assert at 65 seconds; as a result, the last assert runs on a completely empty queue and fails with a very confusing assertion error.
This commit makes the timeout in BlockingClusterStateListener occur after the last assert in assertBusyPendingTasks, and therefore allows assertBusyPendingTasks to perform the last assert before the pending tasks queue is cleaned.
This commit also reduces the maximum number of shards used in the test to 10 in order to speed up this test.
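For context, an awaitBusy-style helper essentially polls a condition until a deadline. The sketch below is a simplified, hypothetical version, not the actual test framework code; it shows why the ordering of the 60 second listener timeout and the final 65 second check matters: the last check is only meaningful if the state it inspects has not been cleaned up in the meantime.

```java
import java.util.function.BooleanSupplier;

// Simplified, hypothetical awaitBusy-style helper: poll a condition with
// backoff until it holds or the deadline passes, then check one last time.
public final class AwaitBusySketch {

    private AwaitBusySketch() {}

    public static boolean awaitBusy(BooleanSupplier condition, long timeoutMillis) throws InterruptedException {
        final long deadline = System.currentTimeMillis() + timeoutMillis;
        long sleep = 10;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            Thread.sleep(sleep);
            sleep = Math.min(sleep * 2, 1_000); // back off between checks
        }
        // The final check only makes sense if nothing has cleaned up the state
        // it inspects in the meantime.
        return condition.getAsBoolean();
    }
}
```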
We have a bug that makes all per-index bitset caches store bitsets for all
indices. In the case that you have many indices, which is fairly common with
time-based data, this could translate to a lot of wasted memory.
Closes#15820
`ScheduledThreadPoolExecutor` allows you to schedule tasks to run once or periodically in the future. If such a task throws an exception, that exception is caught and reported in the future that `ScheduledThreadPoolExecutor#schedule` returns. However, we typically do not capture the future / do not test it for errors. This results in exceptions being swallowed and never reported. To mitigate this we now wrap any command in a `LoggingRunnable` (already used for periodic tasks). Also, `RunnableCommand` is changed not to swallow exceptions but to let them percolate up so they are still reported by the future.
Closes#15824
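To illustrate the swallowing behavior and the wrapping approach, here is a minimal, self-contained sketch; the wrap helper is a stand-in for the real `LoggingRunnable`, not its actual implementation:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public final class ScheduledExceptionSketch {

    public static void main(String[] args) throws Exception {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

        // Swallowed: the exception is only stored in the discarded ScheduledFuture.
        Runnable failing = () -> {
            throw new RuntimeException("nobody will ever see this");
        };
        scheduler.schedule(failing, 10, TimeUnit.MILLISECONDS);

        // Visible: the wrapper reports the failure even if the future is ignored.
        Runnable wrapped = wrap(() -> {
            throw new RuntimeException("this one gets logged");
        });
        scheduler.schedule(wrapped, 10, TimeUnit.MILLISECONDS);

        Thread.sleep(200); // give both tasks time to run
        scheduler.shutdown();
    }

    // Minimal stand-in for a LoggingRunnable-style wrapper.
    static Runnable wrap(Runnable command) {
        return () -> {
            try {
                command.run();
            } catch (Throwable t) {
                System.err.println("scheduled task failed: " + t);
                throw t; // rethrow so the future still reports the failure
            }
        };
    }
}
```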
This commit adds a new test that can throw an IOException at any point in time
and ensures that all previously synced documents can be successfully recovered after hitting
an exception.
Relates to #15788
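The sketch below is a simplified, hypothetical stand-in for this kind of fault injection, not the actual test infrastructure: a stream wrapper that starts throwing IOException after a randomly chosen number of bytes, so a failure can strike at an arbitrary point of a write.

```java
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.Random;

// Hypothetical fault-injecting stream: throws IOException once a randomly
// chosen number of bytes has been written.
public class RandomlyFailingOutputStream extends FilterOutputStream {

    private long remaining;

    public RandomlyFailingOutputStream(OutputStream out, Random random) {
        super(out);
        this.remaining = random.nextInt(10_000); // fail somewhere in the first 10k bytes
    }

    @Override
    public void write(int b) throws IOException {
        if (remaining-- <= 0) {
            throw new IOException("simulated disk failure");
        }
        out.write(b);
    }
}
```

A test can write and sync documents through such a wrapper, catch the injected failure, and then assert that everything synced before the failure is still recoverable.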
Warmers are now barely useful and will be removed in 3.0. Note that this only
removes the warmer API and query-based warmers. We still have warmers internally,
e.g. for global ordinals.
Close#15607
Right now we define the same sort of methods as taking String arrays and
String varargs. We should standardize on one, and varargs is easier to
call, so let's use varargs!
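A minimal illustration of the difference at the call site (hypothetical method names, just to show the shape):

```java
// Hypothetical method names, just to show the difference at the call site.
public class VarargsSketch {

    // Array form: every caller has to build an explicit array.
    void deleteIndicesArray(String[] indices) { /* ... */ }

    // Varargs form: callers simply list the names.
    void deleteIndices(String... indices) { /* ... */ }

    void example() {
        deleteIndicesArray(new String[] { "logs-2016-01", "logs-2016-02" });
        deleteIndices("logs-2016-01", "logs-2016-02"); // easier to call
        deleteIndices();                               // and the empty case is trivial
    }
}
```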
This commit uses a CyclicBarrier to correctly and simply synchronize the
driver and test threads in
ClusterServiceIT#testClusterStateUpdateTasksAreExecutedInOrder.
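A minimal sketch of the synchronization pattern (not the actual test code): both the driver thread and the test thread block on the same barrier, so neither races ahead of the other.

```java
import java.util.concurrent.CyclicBarrier;

public class CyclicBarrierSketch {

    public static void main(String[] args) throws Exception {
        CyclicBarrier barrier = new CyclicBarrier(2);

        Thread driver = new Thread(() -> {
            try {
                // ... submit cluster state update tasks ...
                barrier.await(); // rendezvous: all tasks have been submitted
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        });
        driver.start();

        barrier.await(); // the test thread waits here for the driver
        driver.join();
        // ... assert that the tasks executed in order ...
    }
}
```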
This commit speeds up GeoShapeQueryTests by reducing the size of the randomly generated shapes and defaulting geo_shape indexes to use quadtree (more efficient for shapes) over geohash.
The test failed because the percolator now returns up to 10 matches, whereas before this was unbounded. The test has been updated to take this into account by checking the total count instead of the number of matches.
This commit cleans up some of the assertions in
TransportReplicationActionTests#runReplicateTest:
- use a Map to track actual vs. expected requests (see the sketch after this list)
- assert that no request was sent to the local node
- use RoutingTable#shardRoutingTable convenience method
- explicitly use false in boolean conditions
- clarify that requests are expected on replica shards when they are assigned and
execution on replicas is true
- test ShardRouting equality when checking the failed shard request
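As a rough idea of the Map-based check mentioned above (hypothetical names and shapes, not the actual test code): group the captured requests by target node and compare against the expected counts in one assertion.

```java
import java.util.HashMap;
import java.util.Map;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;

// Hypothetical shape of the Map-based assertion.
public class ReplicationRequestAssertionSketch {

    void assertRequests(Map<String, Integer> expectedRequestsPerNode,
                        Iterable<String> capturedRequestTargets,
                        String localNodeId) {
        Map<String, Integer> actualRequestsPerNode = new HashMap<>();
        for (String nodeId : capturedRequestTargets) {
            actualRequestsPerNode.merge(nodeId, 1, Integer::sum);
        }
        // no request may have been sent to the local node
        assertFalse(actualRequestsPerNode.containsKey(localNodeId));
        assertEquals(expectedRequestsPerNode, actualRequestsPerNode);
    }
}
```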
This commit adds tighter assertions in
TransportReplicationActionTests#runReplicateTest that replication
requests are sent to the correct shard copies.
This commit addresses an issue when handling a failed replication
request against a relocating target shard. Namely, if a replication
request fails against the target of a relocation we currently fail both
the source and the target. This leads to an unnecessary
recovery. Instead, only the target of the relocation should be failed.
* Added percolator field mapper that extracts the query terms and indexes these terms with the percolator query.
* At percolate time these extracted terms are used to query for the percolator queries that are likely to match, so only those candidates need to be evaluated (see the sketch below). This can significantly cut down the time it takes to percolate; before, all percolator queries were evaluated against the document being percolated.
* Changes made to percolator queries are no longer immediately visible, a refresh needs to happen before the changes are visible.
* By default the percolate api only returns up to 10 matches instead of returning all matching percolator queries.
* Made percolate more modular, so that it is easier to add unit tests.
* Added unit tests for the percolator.
Closes#12664
Closes#13646
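A greatly simplified sketch of the term-extraction idea, using plain Lucene types; this is illustrative only and not the actual percolator field mapper, which has to deal with many more query types:

```java
import java.util.HashSet;
import java.util.Set;

import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class QueryTermExtractorSketch {

    // Collect the terms of a registered query so they can be indexed next to it.
    public static Set<Term> extractTerms(Query query) {
        Set<Term> terms = new HashSet<>();
        extract(query, terms);
        return terms;
    }

    private static void extract(Query query, Set<Term> terms) {
        if (query instanceof TermQuery) {
            terms.add(((TermQuery) query).getTerm());
        } else if (query instanceof BooleanQuery) {
            for (BooleanClause clause : (BooleanQuery) query) {
                if (clause.isProhibited() == false) {
                    extract(clause.getQuery(), terms);
                }
            }
        }
        // other query types would need dedicated handling, or a fallback that
        // marks the query as "always evaluate"
    }
}
```

At percolate time the document's terms can then be turned into a query against these indexed terms to select the candidate percolator queries.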
Today we delete the translog-N.tlog file if any subsequent operation fails,
but we might actually be in a good state if, for instance, the creation of the writer
fails after we successfully baked the new translog generation into the checkpoint. In this situation
we used to delete the translog-N.tlog file and fail on the next recovery of the translog with a
NoSuchFileException | FileNotFoundException just like in https://discuss.elastic.co/t/cannot-recover-index-because-of-missing-tanslog-files/38336
This commit changes the behavior and cleans up that limbo state on recovery: if we already have a generation+1 file written but not baked into
the checkpoint, we remove that file, but only if the previous ckp file has already been renamed; otherwise we know we can't be in this state.
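A hypothetical sketch of the recovery-time cleanup decision; the file naming and the recovery hook here are stand-ins, not the real Translog code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch; file names and the recovery hook are stand-ins.
public class TranslogLimboCleanupSketch {

    void cleanupLimboState(Path translogDir, long checkpointGeneration) throws IOException {
        Path nextGeneration = translogDir.resolve("translog-" + (checkpointGeneration + 1) + ".tlog");
        Path previousCheckpoint = translogDir.resolve("translog-" + checkpointGeneration + ".ckp");
        if (Files.exists(nextGeneration) && Files.exists(previousCheckpoint)) {
            // A generation+1 file exists but was never baked into the checkpoint,
            // and the previous generation's ckp file has already been renamed into
            // place: the extra .tlog file is a leftover from a failed roll-over
            // and can be removed instead of tripping the next recovery.
            Files.delete(nextGeneration);
        }
    }
}
```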
We have a postIndex|DeleteUnderLock listener callback to load percolator
queries, which by now is entirely private to the index shard. Yet,
it still calls an external callback while holding an indexing lock, which is scary
since we have no control over how long the operation could possibly take.
This commit decouples the percolator registry entirely from the ShardIndexingService
by pessimistically fetching percolator documents from the engine using realtime get.
Even in situations where the same document is changed concurrently we will eventually end up
in the correct state without losing an update. This also moves the index throttling stats directly into
the engine to entirely remove the need for the dependency between InternalEngine and ShardIndexingService.
Relocating a non-primary shard from one node to another is actually done by recovering from the active
primary shard in the cluster, and not the node that we are logically relocating from.
Closes#15775
This commit addresses an issue where a cluster state task listener
throwing an exception could prevent other listeners from being notified,
and could prevent the executor from receiving notifications that a new
cluster state was published. Additionally, this commit also addresses a
similar issue for executors handling cluster state publication
notifications.
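A minimal sketch of the underlying pattern (not the actual cluster service code): notify each listener inside its own try/catch, so one misbehaving listener cannot suppress the remaining notifications.

```java
import java.util.List;

public class SafeListenerNotificationSketch {

    interface Listener {
        void clusterStatePublished();
    }

    void notifyListeners(List<Listener> listeners) {
        for (Listener listener : listeners) {
            try {
                listener.clusterStatePublished();
            } catch (Exception e) {
                // log and keep going: one misbehaving listener must not prevent
                // the others (or the executor) from being notified
                System.err.println("listener failed to handle cluster state publication: " + e);
            }
        }
    }
}
```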