Both the TopDocsCollector and the LeafCollector were kept around at the aggregator level. If a nested aggregator performed a post collection, docids could be pushed down to top hits child aggregators that had already moved on to the next LeafCollector (tripping assertions and producing incorrect results).
By keeping track of the LeafCollector in a separate map at the leaf bucket level, this problem simply cannot happen any more, as the place holding the LeafCollector is no longer shared.
Also, LeafCollector instances for TopDocsCollectors are no longer pre-created when a new segment starts being evaluated. There is no guarantee that the TopHitsAggregator encounters a document for a particular bucket, so there has to be logic to create LeafCollector instances that have not been seen before.
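A minimal sketch of the idea using Lucene's collector API; the class and the fixed hit count of 10 are illustrative, not the actual TopHitsAggregator code:

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.LeafCollector;
import org.apache.lucene.search.TopDocsCollector;
import org.apache.lucene.search.TopScoreDocCollector;

class PerBucketTopDocsSketch {
    private final Map<Long, TopDocsCollector<?>> topDocsCollectors = new HashMap<>();
    // Leaf collectors are tracked per bucket and only for the current segment,
    // so a stale LeafCollector from a previous segment can never be reused.
    private final Map<Long, LeafCollector> leafCollectors = new HashMap<>();
    private LeafReaderContext currentLeaf;

    void setNextReader(LeafReaderContext ctx) {
        currentLeaf = ctx;
        // Intentionally no pre-creation here: a bucket may never see a
        // document in this segment, and new buckets can appear mid-segment.
        leafCollectors.clear();
    }

    void collect(int doc, long bucket) throws IOException {
        LeafCollector leaf = leafCollectors.get(bucket);
        if (leaf == null) { // first document for this bucket in this segment
            TopDocsCollector<?> topDocs = topDocsCollectors.computeIfAbsent(
                    bucket, b -> TopScoreDocCollector.create(10));
            leaf = topDocs.getLeafCollector(currentLeaf);
            leafCollectors.put(bucket, leaf);
        }
        leaf.collect(doc); // scorer wiring omitted for brevity
    }
}
```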
Closes #26738
With 6.0 rc1 we now publish sha512 checksums for official plugins.
However, in order to ease the pain for plugin authors, this commit adds
backcompat to still allow sha1 checksums. It also adds tests for
checksum handling.
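A hedged sketch of the fallback idea (names are illustrative, not the actual InstallPluginCommand code): prefer the `.sha512` file and fall back to `.sha1` only for plugins published before 6.0 rc1.

```java
import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;

class ChecksumFallbackSketch {
    /** Returns [expectedChecksum, digestAlgorithm], preferring sha512. */
    static String[] fetchExpectedChecksum(String pluginUrl) throws IOException {
        String[][] candidates = {{".sha512", "SHA-512"}, {".sha1", "SHA-1"}};
        for (String[] candidate : candidates) {
            URL checksumUrl = new URL(pluginUrl + candidate[0]);
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(checksumUrl.openStream(), StandardCharsets.UTF_8))) {
                // checksum files carry the hex digest as the first token
                return new String[] {reader.readLine().trim().split("\\s+")[0], candidate[1]};
            } catch (FileNotFoundException e) {
                // fall through: older plugins only publish a .sha1 file
            }
        }
        throw new IOException("no checksum file found for " + pluginUrl);
    }
}
```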
Closes #26746
Adds the wait_for_active_shards parameter to the index open command. Similar to the index creation command, the index open command will now, by default, wait until the primaries have been allocated.
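For illustration only (method signature per the 6.x low-level RestClient; index name and copy count are made up), the new parameter is passed like any other query parameter:

```java
import java.io.IOException;
import java.util.Collections;

import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

class OpenIndexExample {
    // open the index and wait for the given number of active shard copies
    static Response openAndWait(RestClient client, String index, int copies) throws IOException {
        return client.performRequest("POST", "/" + index + "/_open",
                Collections.singletonMap("wait_for_active_shards", Integer.toString(copies)));
    }
}
```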
Closes #20937
It is the exciting return of the global checkpoint background
sync. Long, long ago, in a snapshot version far, far away, we had one
and only one global checkpoint sync: a background sync that would fire
periodically and send the global checkpoint from the primary shard to
the replicas so that they could update their local knowledge of the
global checkpoint. Later in time, as we sped ahead towards finalizing
the initial version of sequence IDs, we realized that we needed the
global checkpoint updates to be inline. This means that on a
replication operation, the primary shard would piggyback the global
checkpoint with the replication operation to the replicas. The
replicas would update their local knowledge of the global checkpoint
and reply with their local checkpoint. However, the global checkpoint
on the primary could then advance again and the replicas would fall
behind in their local knowledge of the global checkpoint. If another
replication operation never fired, then the replicas would be
permanently behind. To account for this, we added one more sync that
would fire when the primary shard fell idle. However, this approach
has problems:
- the shard idle timer defaults to five minutes, a long time to wait
for the replicas to learn of the new global checkpoint
- if a replica missed the sync, there was no follow-up sync to catch
them up
- there is an inherent race condition where the primary shard could
fall idle mid-operation (after having sent the replication request to
the replicas); in this case, there would never be a background sync
after the operation completes
- tying the global checkpoint sync to the idle timer was never natural
To fix this, we make two additional changes to sync the global
checkpoint to the replicas. The first is that we add a post-operation
sync that only fires if there are no operations in flight and there is a
lagging replica. This gives us a chance to sync the global checkpoint to
the replicas immediately after an operation so that they are always kept
up to date. The second is that we add back a global checkpoint
background sync that fires on a timer. This timer fires every thirty
seconds, and is not configurable (for simplicity). This background sync
is smarter than what we had previously in the sense that it only sends a
sync if the global checkpoint on at least one replica is lagging that of
the primary. When the timer fires, we can compare the global checkpoint
on the primary to its knowledge of the global checkpoint on the replicas
and only send a sync if there is a shard behind.
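A compact sketch of the two triggers and the shared lagging-replica check; all names here are hypothetical, the real logic lives in IndexShard and the global checkpoint tracker:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class GlobalCheckpointSyncSketch {
    private volatile long primaryGlobalCheckpoint;
    private volatile int inFlightOperations;
    private final Map<String, Long> knownReplicaCheckpoints = new ConcurrentHashMap<>();

    /** A sync is needed only if at least one replica is lagging the primary. */
    boolean syncNeeded() {
        return knownReplicaCheckpoints.values().stream()
                .anyMatch(cp -> cp < primaryGlobalCheckpoint);
    }

    /** Post-operation trigger: fire only when the shard is quiescent. */
    void afterOperation() {
        if (inFlightOperations == 0 && syncNeeded()) {
            sendSync();
        }
    }

    /** Background trigger: a fixed 30s timer re-checks the same condition. */
    void onBackgroundTimer() {
        if (syncNeeded()) {
            sendSync();
        }
    }

    private void sendSync() { /* replication-level global checkpoint sync */ }
}
```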
Relates #26591
Adds checks for special permissions before reading HDFS stream data. Also adds a test from
the read-only repository fix. MiniHDFS will now start with an existing repository with a single snapshot
contained within. A read-only repository is created in the tests and attempts to list the snapshots
within this repo.
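A minimal sketch of the permissions pattern (pure JDK, showing how plugin code typically wraps reads under the security manager, not the exact HDFS repository code):

```java
import java.io.IOException;
import java.io.InputStream;
import java.security.AccessController;
import java.security.PrivilegedActionException;
import java.security.PrivilegedExceptionAction;

class PrivilegedStreamRead {
    /** Reads from the stream with the plugin's own (elevated) permissions. */
    static int privilegedRead(InputStream in, byte[] buffer) throws IOException {
        try {
            return AccessController.doPrivileged(
                    (PrivilegedExceptionAction<Integer>) () -> in.read(buffer));
        } catch (PrivilegedActionException e) {
            // the action only throws IOException, so this cast is safe
            throw (IOException) e.getException();
        }
    }
}
```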
When using a bulk processor, the thread context was not preserved for the flush runnable, which is
executed in another thread in the thread pool. This change wraps the flush runnable in a context
preserving runnable so that the headers and transients from the creation time of the bulk processor
are available during the execution of the flush.
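A sketch of the wrapping, assuming the 6.x `ThreadPool`/`ThreadContext` APIs (`preserveContext` returns a runnable that restores the captured context around execution); the class and task are illustrative:

```java
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.threadpool.ThreadPool;

class FlushScheduling {
    // Schedule the flush task so that it runs with the headers and
    // transients captured at bulk-processor creation time.
    static ThreadPool.Cancellable scheduleFlush(ThreadPool threadPool, Runnable flushTask,
                                                TimeValue interval) {
        Runnable preserving = threadPool.getThreadContext().preserveContext(flushTask);
        return threadPool.scheduleWithFixedDelay(preserving, interval, ThreadPool.Names.GENERIC);
    }
}
```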
Closes #26596
The `fielddata` field and the use of the `_name` field in the short syntax of the range
query have been deprecated in 5.0 and can be removed.
The same goes for the deprecated `score_mode` field in HasParentQueryBuilder,
the deprecated `like_text`, `ids` and `docs` parameter in the `more_like_this` query,
the deprecated query name in the short version of the `regexp` query, and several
deprecated alternative field names in other query builders.
The `type` field has been deprecated in 5.0 and can be removed. It has been
replaced by using the MatchPhraseQueryBuilder or the
MatchPhrasePrefixQueryBuilder. The `slop` field has also been deprecated and can
be removed; the phrase and phrase prefix query builders still provide this
parameter.
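For example, instead of a match query with the deprecated `type` and `slop` fields, the dedicated builders are used (field names and values made up):

```java
import org.elasticsearch.index.query.MatchPhrasePrefixQueryBuilder;
import org.elasticsearch.index.query.MatchPhraseQueryBuilder;

class PhraseQueryExample {
    // instead of matchQuery("title", "quick fox").type(PHRASE).slop(1):
    static final MatchPhraseQueryBuilder PHRASE =
            new MatchPhraseQueryBuilder("title", "quick fox").slop(1);

    // instead of type(PHRASE_PREFIX):
    static final MatchPhrasePrefixQueryBuilder PREFIX =
            new MatchPhrasePrefixQueryBuilder("title", "quick fo");
}
```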
- Removes mutual dependency between GatewayMetaState and TransportNodesListGatewayMetaState
- Deguices MetaDataIndexUpgradeService
- Deguices GatewayMetaState
- Makes Gateway the master-level component that is only responsible for coordinating the state recovery
The Request class is currently package protected, making it difficult for
users to extend the RestHighLevelClient and to use its protected
methods to execute requests. This commit makes the Request class public
and changes a few methods of RestHighLevelClient to be protected.
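A sketch of what this enables (the single-argument constructor matches the early 6.x client; treat details as assumptions):

```java
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

// With Request public, a client can be extended and the protected
// performRequest* machinery of the parent class reused for custom endpoints.
public class MyElasticsearchClient extends RestHighLevelClient {
    public MyElasticsearchClient(RestClient restClient) {
        super(restClient);
    }
    // custom methods would build a Request for an endpoint the high-level
    // client does not cover yet and delegate to the protected helpers
}
```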
The nested aggregator now buffers all bucket ords per parent document and
emits all bucket ords for a parent document's nested documents at once. This way
the DocIdSetIterator over the nested documents is only consumed once,
instead of wrapping the nested aggregator inside a multi bucket aggregator,
which was the solution up to now. This allows sorting by buckets
under a nested bucket.
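A simplified sketch of the buffering idea (all names illustrative, not the actual NestedAggregator code): bucket ordinals for the current parent document are buffered and replayed in one pass over the nested (child) doc id iterator.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

class BufferingNestedSketch {
    private int currentParentDoc = -1;
    private final List<Long> bufferedBucketOrds = new ArrayList<>();

    void collect(int parentDoc, long bucketOrd) throws IOException {
        if (parentDoc != currentParentDoc) {
            processBufferedChildBuckets(); // single pass over child docs
            currentParentDoc = parentDoc;
        }
        bufferedBucketOrds.add(bucketOrd);
    }

    void postCollection() throws IOException {
        processBufferedChildBuckets(); // flush the final parent's buffer
    }

    private void processBufferedChildBuckets() throws IOException {
        // advance the child DocIdSetIterator once, emitting every buffered
        // bucket ord for each child document of the previous parent
        bufferedBucketOrds.clear();
    }
}
```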
Closes #16838
When adding file-based discovery, we added a fallback when the discovery
type was set to zen (the default, so everyone got this warning). This
commit removes the fallback for 6.0. Enabling file-based discovery should now
happen explicitly through the hosts_provider setting.
Closes #26661
This assertion is wrong because the global checkpoint on a promoted
primary can lag behind the replicas until it catches up through
resyncs, ongoing indexing operations, and the removal of the old
primary from the in-sync set.
TemplateUpgradeService might get stuck repeatedly upgrading templates after an upgrade to 5.6.0. This is caused by the mappings definition in the template being shuffled during template serialization. This commit makes the template serialization consistent.
Closes #26673
Restoring a shard from a snapshot throws the primary back in time, violating assumptions and bringing the validity of global checkpoints into question. To avoid problems, we should make sure that a shard that was restored will never be the source of an ops-based recovery to a shard that existed before the restore. To this end we have introduced the notion of `history_uuid` in #26577 and required that both source and target have the same history to allow ops-based recoveries. This PR makes sure that a shard gets a new uuid after restore.
As suggested by @ywelsch, I derived the creation of a `history_uuid` from the `RecoverySource` of the shard. Store recovery will only generate a uuid if it doesn't already exist (we can make this stricter when we don't need to deal with 5.x indices). Peer recovery follows the same logic (note that this is different from the approach in #26557; I went this way as it means that shards always have a history uuid after being recovered on a 6.x node, and also that a rolling restart is enough for old indices to step over to the new seq no model). Local shards and snapshot force the generation of a new translog uuid.
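An illustrative decision table of the above, not the actual StoreRecovery code (the recovery source names mirror the `RecoverySource.Type` enum; everything else is hypothetical):

```java
import java.util.UUID;

class HistoryUuidSketch {
    static String historyUuidFor(String recoverySourceType, String existingHistoryUuid) {
        switch (recoverySourceType) {
            case "EXISTING_STORE":
            case "PEER":
                // keep an existing uuid; only generate one for legacy 5.x shards
                return existingHistoryUuid != null ? existingHistoryUuid : newUuid();
            case "SNAPSHOT":
            case "LOCAL_SHARDS":
                // restore throws the shard back in time: force a new history
                return newUuid();
            default:
                throw new IllegalArgumentException("unknown recovery source [" + recoverySourceType + "]");
        }
    }

    private static String newUuid() {
        return UUID.randomUUID().toString();
    }
}
```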
Relates #10708
Closes #26544
This commit moves the pre-6.0 node checkpoint constant from
SequenceNumbersService to SequenceNumbers so it can chill with the other
sequence number-related constants.
Relates #26690
Request bodies that consist only of a String value can lead to endless loops in the
parser of several REST requests, e.g. `_count`. Up to 5.2 this seems to have been caught
in the logic guessing the content type of the request, but since then it causes the node to
block. This change introduces checks for receiving a valid xContent object before starting the
parsing in RestActions#parseTopLevelQueryBuilder().
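A sketch of the guard (the wrapping class is illustrative; the real check lives in RestActions#parseTopLevelQueryBuilder):

```java
import java.io.IOException;

import org.elasticsearch.common.ParsingException;
import org.elasticsearch.common.xcontent.XContentParser;

class TopLevelQueryGuard {
    // reject bodies that are not an xContent object before any parsing starts
    static void ensureObjectBody(XContentParser parser) throws IOException {
        XContentParser.Token first = parser.nextToken();
        if (first != XContentParser.Token.START_OBJECT) {
            throw new ParsingException(parser.getTokenLocation(),
                    "malformed query, expected [START_OBJECT] but found [" + first + "]");
        }
    }
}
```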
Closes #26083
Adds several small whitelist data structures and a new Whitelist class to separate the idea of loading a whitelist from the actual Painless Definition class. This is the first step of many in allowing users to define custom whitelists per context. Also supports the idea of loading multiple whitelists from different sources for a single context.
* Add a version constant for 5.6.2 so that the 5.6.1 constant
represents the 5.6.1 release and the 5.6.2 constant represents
the unreleased 5.6 branch.
Today we can't validate the array length in `InputStreamStreamInput` since
we can't rely on `InputStream.available`. Yet in some situations we know
the size of the stream and can apply additional validation.
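A sketch of the idea (names illustrative; the real hook is a length-validation method on the stream wrapper): when the total size is known up front, bogus array length prefixes can be rejected before any allocation.

```java
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

class SizeLimitedStreamInputSketch {
    private final InputStream in;
    private long remaining;

    SizeLimitedStreamInputSketch(InputStream in, long knownSize) {
        this.in = in;
        this.remaining = knownSize;
    }

    void ensureCanReadBytes(int length) throws EOFException {
        if (length > remaining) {
            throw new EOFException("tried to read [" + length
                    + "] bytes but only [" + remaining + "] remain");
        }
    }

    byte[] readByteArray(int declaredLength) throws IOException {
        ensureCanReadBytes(declaredLength); // reject bogus length prefixes early
        byte[] bytes = new byte[declaredLength];
        int read = 0;
        while (read < declaredLength) {
            int n = in.read(bytes, read, declaredLength - read);
            if (n < 0) {
                throw new EOFException();
            }
            read += n;
        }
        remaining -= declaredLength;
        return bytes;
    }
}
```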
After recovery completes from a primary, we now update the local
knowledge on the primary of the global checkpoint on the recovery
target. However, if this occurs concurrently with a relocation, an
assertion could trip that we are no longer in primary mode. As this
local knowledge should only be tracked when we are in primary mode,
updating this local knowledge should be done under a permit. This commit
causes that to be the case.
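Sketched below is the permit-guarded pattern; the method names mirror the 6.x `IndexShard` API but should be treated as assumptions, and all other identifiers are placeholders:

```java
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.common.lease.Releasable;
import org.elasticsearch.index.shard.IndexShard;
import org.elasticsearch.threadpool.ThreadPool;

class PermitGuardedUpdate {
    static void updateUnderPermit(IndexShard primaryShard, String targetAllocationId,
                                  long checkpoint, ActionListener<Void> listener) {
        primaryShard.acquirePrimaryOperationPermit(
                ActionListener.wrap(releasable -> {
                    try (Releasable ignored = releasable) {
                        // safe: the permit guarantees we are still in primary mode
                        primaryShard.updateGlobalCheckpointForShard(targetAllocationId, checkpoint);
                    }
                    listener.onResponse(null);
                }, listener::onFailure),
                ThreadPool.Names.SAME);
    }
}
```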
Relates #26666
When checking that the global checkpoint on the primary is consistent
with the local checkpoints of the in-sync shards, we have to filter
pre-6.0 nodes from the check or the invariant will trivially trip. This
commit filters these nodes out when checking this invariant.
Relates #26666
This commit adds a skip for the bad request REST test on pre-6.0
nodes. Previously, a request for /_(.*) where $1 is not an existing
endpoint would return a 404. This is because the request would be
treated as a get index request for an index named _$1. However, an index
can never start with "_" so logic was added to detect this and return a
400 instead as this should be treated as a bad request. During the
mixed-cluster BWC tests, a node running pre-6.0 code will still return a
404 though. Therefore, this test needs to be skipped in such a
mixed-cluster scenario.
This commit reenables the BWC tests after they were disabled for
backporting the change to track global checkpoints of shard copies on
the primary.
Relates #26666
This commit adds local tracking of the global checkpoints on all shard
copies when a global checkpoint tracker is operating in primary
mode. With this, we relay the global checkpoint on a shard copy back to
the primary shard during replication operations. This serves as another
step towards adding a background sync of the global checkpoint to the
shard copies.
Relates #26666
In this test, 260b is replaced by the regexp \d+b,
but the test sometimes produces results like 1.1kb,
so this commit adapts the regexp to match values
with decimals.
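Illustrative only (the concrete pattern in the test may differ): a regexp with an optional decimal part and unit prefix covers both outputs.

```java
import java.util.regex.Pattern;

class SizeRegexExample {
    // matches "260b" as well as "1.1kb" (optional decimals, optional unit prefix)
    static final Pattern SIZE = Pattern.compile("\\d+(\\.\\d+)?[a-z]*b");

    public static void main(String[] args) {
        System.out.println(SIZE.matcher("260b").matches());  // true
        System.out.println(SIZE.matcher("1.1kb").matches()); // true
    }
}
```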
The discovery-file plugin was not config path aware, so it always picked
up the default config path (from Elasticsearch home) rather than a
custom config path. This commit fixes the discovery-file plugin to
respect a custom config path.
Relates #26662
This commit adds validation to the resolution of indices in the wildcard
expression resolver. It no longer throws a 404 Not Found when resolving
invalid indices; it throws a 400 instead, as the request names an invalid
index. This was the behavior in 5.x.