This commit moves GlobalCheckpointTracker from the engine to IndexShard, where it better fits logically: tracking the global checkpoint based on the local checkpoints of all shards in the replication group is not a property of the engine, but rather a property fulfilled by the current primary shard. The LocalCheckpointTracker, on the other hand, is driven by the contents of the local translog. With GlobalCheckpointTracker moved to IndexShard, it makes little sense to keep the SequenceNumbersService class around, as it would only wrap the LocalCheckpointTracker. This commit therefore removes the class and replaces occurrences of SequenceNumbersService in the engine directly with LocalCheckpointTracker.
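As a rough, hypothetical sketch of the resulting division of responsibility (simplified names and semantics, not the actual classes): the engine owns the LocalCheckpointTracker fed by the translog, while IndexShard owns the GlobalCheckpointTracker that aggregates the local checkpoints of the replication group.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Heavily simplified stand-ins for the real trackers.
class LocalCheckpointTracker {
    private long checkpoint = -1; // everything up to here is completed

    synchronized void markSeqNoAsCompleted(long seqNo) {
        // the real tracker uses bit sets and handles out-of-order completion;
        // this sketch only advances a contiguous checkpoint
        if (seqNo == checkpoint + 1) {
            checkpoint = seqNo;
        }
    }

    synchronized long getCheckpoint() {
        return checkpoint;
    }
}

class GlobalCheckpointTracker {
    private final Map<String, Long> localCheckpoints = new ConcurrentHashMap<>();

    void updateLocalCheckpoint(String allocationId, long checkpoint) {
        localCheckpoints.put(allocationId, checkpoint);
    }

    long getGlobalCheckpoint() {
        // minimum local checkpoint across the in-sync copies
        return localCheckpoints.values().stream()
            .mapToLong(Long::longValue).min().orElse(-1L);
    }
}
```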
This commit fixes the version tests for release tests. The problem here
is that during release tests all versions should be treated as released,
so the assertions must be modified accordingly.
Relates #27815
When snapshotting the primary we capture a lucene commit at an arbitrary moment from a sequence number perspective. This means that it is possible that the commit misses operations and that there is a gap between the local checkpoint in the commit and the maximum sequence number.
When we restore, this will create a primary that "misses" operations, and currently the sequence number system will be stuck (i.e., the local checkpoint will be stuck). To fix this we should fill in the gaps when we restore, in a similar fashion to normal store recovery.
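A minimal sketch of the gap-filling idea; the markSeqNoAsCompleted callback is a stand-in for recording a no-op for each missing operation, and the real recovery code differs:

```java
import java.util.function.LongConsumer;

final class SeqNoGapFiller {
    // fill the gap between the commit's local checkpoint and its maximum
    // sequence number so the checkpoint can advance after the restore
    static void fillGaps(long localCheckpoint, long maxSeqNo, LongConsumer markSeqNoAsCompleted) {
        for (long seqNo = localCheckpoint + 1; seqNo <= maxSeqNo; seqNo++) {
            markSeqNoAsCompleted.accept(seqNo); // e.g. index a no-op
        }
    }
}
```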
Normally the hole is assigned to the component of the first edge to the south
of one of its vertices, but if the chosen hole vertex is south of everything
then the binary search returns -1 yielding an ArrayIndexOutOfBoundsException.
Instead, assign the vertex to the component of the first edge to its north.
Subsequent validation catches the fact that the hole is outside its component.
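Schematically, the guard looks like this (a hypothetical edgeYs array sorted south-to-north, not the actual polygon-builder code):

```java
import java.util.Arrays;

final class HoleAssignment {
    // returns the index of the edge whose component the hole should join
    static int pickEdgeIndex(double[] edgeYs, double holeVertexY) {
        int idx = Arrays.binarySearch(edgeYs, holeVertexY);
        if (idx < 0) {
            idx = -(idx + 1) - 1; // index of the first edge to the south
        }
        if (idx == -1) {
            // the vertex is south of everything; previously this index was
            // used directly and caused an ArrayIndexOutOfBoundsException,
            // so fall back to the first edge to the north
            idx = 0;
        }
        return idx;
    }
}
```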
Fixes #25933
This commit moves the range field mapper back to core so that we can
remove the compile-time dependency of percolator on mapper-extras which
complicates dependency management for the percolator client JAR, and
modules should not be intertwined like this anyway.
Relates #27854
This commit addresses the publication of the elasticsearch-cli to
Maven. For now for simplicity we publish this to Maven so that it is
available as a transitive dependency for any artifacts that depend on
the core elasticsearch artifact. It is possible that in the future we
can simply exclude this dependency but for now this is the safest and
simplest approach that can happen in a patch release.
Relates #27853
Today we use the in-memory global checkpoint from SequenceNumbersService
to clean up unneeded commit points, however the latest global checkpoint
may not have been fsynced to disk yet. If the translog checkpoint fsync
failed and we have already used a higher global checkpoint to clean up
commit points, then we may have removed a safe commit which we try to
keep for recovery.
This commit updates the deletion policy to use lastSyncedGlobalCheckpoint
from Translog rather than the in-memory global checkpoint.
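A sketch of the intent, with a hypothetical canDeleteCommit hook (the real policy operates on index commits and translog generations):

```java
import java.util.function.LongSupplier;

final class DeletionPolicySketch {
    // e.g. translog::getLastSyncedGlobalCheckpoint
    private final LongSupplier lastSyncedGlobalCheckpoint;

    DeletionPolicySketch(LongSupplier lastSyncedGlobalCheckpoint) {
        this.lastSyncedGlobalCheckpoint = lastSyncedGlobalCheckpoint;
    }

    boolean canDeleteCommit(long commitMaxSeqNo) {
        // only discard a commit once the checkpoint that justifies discarding
        // it has been fsynced; the in-memory checkpoint may be ahead of disk
        return commitMaxSeqNo <= lastSyncedGlobalCheckpoint.getAsLong();
    }
}
```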
Relates #27606
Today we still maintain a version map even if we only index append-only,
or in other words, documents with auto-generated IDs. We can instead maintain
an unsafe version map that will be swapped to a safe version map only if necessary,
once we see the first document that requires access to the version map. For instance:
* an auto-generated ID retry
* any kind of delete
* a document with a foreign ID (non-auto-generated)
In these cases we forcefully refresh the internal reader and start maintaining
a version map until such a safe map hasn't been necessary for two refresh cycles.
Indices / shards that never see an auto-generated ID document will always maintain a version
map, and in the case of a delete / retry in a pure append-only index the version map will be
de-optimized for a short amount of time until we know it's safe again to swap back. This
will also minimize the required refreshes.
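A loose, hypothetical sketch of the switching behaviour (the real LiveVersionMap logic is considerably more involved):

```java
final class VersionMapModeSketch {
    private boolean safe = false;        // pure append-only indexing starts unsafe
    private int safeRefreshesLeft = 0;   // cycles to wait before swapping back

    void beforeOperation(boolean requiresVersionLookup, Runnable refreshInternalReader) {
        if (requiresVersionLookup && safe == false) {
            // an auto-generated-ID retry, a delete, or a foreign ID: refresh
            // so the reader sees everything indexed so far, then go safe
            refreshInternalReader.run();
            safe = true;
            safeRefreshesLeft = 2;
        }
    }

    void afterRefresh(boolean safeMapWasNeeded) {
        if (safe && safeMapWasNeeded == false && --safeRefreshesLeft <= 0) {
            safe = false; // not needed for two refresh cycles: swap back
        }
    }
}
```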
Closes #19813
Allowing `_doc` as a type will enable users to make the transition to 7.0
smoother since the index APIs will be `PUT index/_doc/id` and `POST index/_doc`.
This also moves most of the documentation to `_doc` as a type name.
Closes #27750
Closes #27751
* Fixes ByteSizeValue to serialise correctly
This change makes a few fixes to ByteSizeValue to make it possible to perform round-trip serialisation:
* Changes wire serialisation to use ZLong methods instead of VLong methods. This is needed because the value `-1` is accepted, but previously a supplied `-1` could not be serialised using the wire protocol.
* Limits the supplied size to be no more than Long.MAX_VALUE when converted to bytes. Previously values greater than Long.MAX_VALUE bytes were accepted but would be silently interpreted as Long.MAX_VALUE bytes rather than erroring, so the user had no idea the value was not being used the way they had intended. I consider this a bug and so fine to include this bug fix in a minor version, but I am open to other points of view.
* Adds a `getStringRep()` method that can be used when serialising the value to JSON. This will print the bytes value if the size is positive, `"0"` if the size is `0`, and `"-1"` if the size is `-1`.
* Adds logic to detect fractional values when parsing from a String and emits a deprecation warning in this case.
* Modifies hashCode and equals methods to work with long values rather than doubles so they don't run into precision problems when dealing with large values. Prior to this change the equals method would not detect small differences in the values (e.g. 1-1000 byte ranges) if the actual values were very large (e.g. PBs). This was due to the values being on the order of 10^18 while doubles only maintain a precision of ~10^15.
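On the first point, a sketch of why the wire change matters: a vlong assumes non-negative values, while a zlong zig-zag encodes the sign so that `-1` becomes the small unsigned value `1`. This shows the general technique, not the stream code itself:

```java
final class ZigZag {
    static long encode(long value) {
        return (value << 1) ^ (value >> 63); // -1 -> 1, 0 -> 0, 1 -> 2, -2 -> 3
    }

    static long decode(long encoded) {
        return (encoded >>> 1) ^ -(encoded & 1);
    }
}
```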
Closes #27568
* Fix bytes settings default value to not use fractional values
* Fixes test
* Addresses review comments
* Modifies parsing to preserve unit
This should be bwc since, in the case that the input is fractional, it reverts to the old method of parsing it to the bytes value.
* Addresses more review comments
* Fixes tests
* Temporarily changes version check to 7.0.0
This will be changed to 6.2 when the fix has been backported
The CountedBitSet can automatically release its internal bitsets when
all bits are set to reduce memory usage. This structure can work well
for sequence numbers as these numbers are likely to form contiguous
ranges. This commit replaces FixedBitSet by CountedBitSet in
LocalCheckpointTracker.
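A simplified, hypothetical version of the idea (the real CountedBitSet wraps a Lucene FixedBitSet):

```java
import java.util.BitSet;

final class CountedBitSetSketch {
    private final int size;
    private int count;   // number of bits currently set
    private BitSet bits; // null once every bit is set

    CountedBitSetSketch(int size) {
        this.size = size;
        this.bits = new BitSet(size);
    }

    void set(int index) {
        if (bits != null && bits.get(index) == false) {
            bits.set(index);
            if (++count == size) {
                bits = null; // fully set: release the internal bit set
            }
        }
    }

    boolean get(int index) {
        return bits == null || bits.get(index); // fully set implies true
    }
}
```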
The test testWithRandomException was not updated to reflect the
latest translog policy. The method setTranslogGenerationOfLastCommit should
be called before setMinTranslogGenerationForRecovery whenever the latter is called.
Relates #27606
We need to keep index commits and translog operations up to the current
global checkpoint to allow us to throw away unsafe operations and
increase the operation-based recovery chance. This is achieved by a new
index deletion policy.
Relates #10708
This commit changes the RestoreService so that it now fails the snapshot
restore if one of the shards to restore has failed to be allocated. It also adds
a new RestoreInProgressAllocationDecider that forbids such shards from being
allocated again. This way, when a restore is impossible or has failed too many
times, the user is forced to take manual action (like deleting the index
whose shards failed) in order to try to restore it again.
This behaviour has been implemented because when the allocation of a
shard has been retried too many times, the MaxRetryDecider is engaged
to prevent any future allocation of the failed shard. If this happened while
restoring a snapshot, the restore hung and never completed because
it stayed around waiting for the shards to be assigned (which would never happen).
It also blocked future attempts to restore the snapshot again. With this commit,
the restore does not hang and is marked as failed, leaving failed shards
around for investigation.
This is the second part of the #26865 issue.
Closes #26865
This pull request changes the S3BlobContainer.blobExists() method implementation
to make it use the AmazonS3.doesObjectExist() method instead of
AmazonS3.getObjectMetadata(). The AmazonS3 implementation takes care of
catching any thrown AmazonS3Exception and compares its response code with 404,
returning false (object does not exist) or lets the exception be propagated.
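A sketch of the resulting shape (simplified, not the actual S3BlobContainer code); doesObjectExist is an existing AmazonS3 API method that performs the 404 comparison internally:

```java
import com.amazonaws.services.s3.AmazonS3;

final class S3Blobs {
    private final AmazonS3 client;
    private final String bucket;

    S3Blobs(AmazonS3 client, String bucket) {
        this.client = client;
        this.bucket = bucket;
    }

    boolean blobExists(String blobName) {
        // previously: getObjectMetadata() wrapped in a try/catch comparing
        // the AmazonS3Exception status code with 404
        return client.doesObjectExist(bucket, blobName);
    }
}
```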
Previously we would unidle a primary shard during recovery in case the
recovery target would miss a background global checkpoint sync. However,
the background global checkpoint syncs are no longer tied to the primary
shard falling idle and so this unidling is no longer needed.
Relates #27757
This removes special casing for documents without a sequence ID.
This code is complex enough with sequence IDs; we should clean things up
when we can, and we don't support 5.x indexing in 7.x anymore.
The performance of this method is abysmal; it leads to the
balanced/unbalanced cluster tests taking twenty seconds! The reason for
the performance issue is a quadruple-nested for loop. The inner
double-nested loop is partitioning shards by shard ID in disguise, so we
simply extract this into computing a partition of shards by shard ID
once. Now the balanced/unbalanced cluster tests do not take twenty seconds
to run.
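A sketch of the extraction with hypothetical ShardId/ShardRouting stand-ins; the point is that the partition is computed once up front rather than re-derived inside the nested loops:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

record ShardId(String index, int id) {}
record ShardRouting(ShardId shardId) {}

final class ShardPartitioner {
    static Map<ShardId, List<ShardRouting>> partitionByShardId(List<ShardRouting> shards) {
        // one O(n) pass replaces the per-iteration inner double loop
        return shards.stream().collect(Collectors.groupingBy(ShardRouting::shardId));
    }
}
```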
Relates #27747
#27409 deprecated the incorrectly-spelled `levenstein` in favour of `levenshtein`.
#27526 deprecated the inconsistent `jarowinkler` in favour of `jaro_winkler`.
These changes were merged into 6.2, and this change removes them entirely in 7.0.
* CustomFieldQuery: removed a redundant type check that was
already done higher up in the same if/else chain.
* PrioritizedEsThreadPoolExecutor: removed a check that was
simply a duplicate of an earlier one and would never have been true.
The current code contains an instanceof check and a comment that this should
eventually be changed to something else. The typeName() method should return a unique
name for the field type in question (geo_shape), so it can be used instead.
This commit addresses slowness in the test that a rejected execution
contains the node name. The slowness came from setting the count on a
countdown latch too high (two in the case of the search thread pool)
where there would never be a second countdown on the latch. This means
that when the test node is shutting down, closing the node would have
to wait a full ten seconds before forcefully terminating the thread
pool. This commit fixes the issue so that the node can close
immediately, shaving ten seconds off the run time of the test.
Relates #27663
This commit fixes the test of an index with an unknown setting. The
problem here is that we were manipulating the index state on disk, but a
cluster state update could arrive between us manipulating the index
state on disk and us restarting the node, leading to the index state
that we just intentionally broke being fixed. As such, after restart,
the index state would not be in the state that we expected it to be in
and the test would fail. To address this, we hook into the restart and
break the index state immediately before the node is started again.
Relates #26995
This commit attempts to continue unifying the logic between different
transport implementations. As transports call a `TcpTransport` callback
when a new channel is accepted, there is no need to internally track
channels accepted. Instead there is a set of accepted channels in
`TcpTransport`. This set is used for metrics and shutting down channels.
Today we are lenient and we open an index if it has broken
settings. This can happen if a user installs a plugin that registers an
index setting, creates an index with that setting, stops their node,
removes the plugin, and then restarts the node. In this case, the index
will have a setting that we do not recognize, yet we open the index
anyway. This leniency is dangerous so this commit removes it. Note that
we still are lenient on upgrades and we should really reconsider this in
a follow-up.
Relates #26995
Setting a timeout here speeds the test up significantly since we do not
need to wait up to the default of 30 seconds for shards to start; we only
need an ACK that the index was opened.
This is related to #27563. This commit modifies the
InboundChannelBuffer to support releasable byte pages. These byte
pages are provided by the PageCacheRecycler. The PageCacheRecycler
must be passed to the Transport with this change.
We have some methods Strings#splitStringByCommaToArray and
Strings#splitStringByCommaToSet. It is not obvious that the former
leaves whitespace and the latter trims it. We also have
Strings#tokenizeToStringArray which tokenizes a string to an array, and
trims whitespace. It seems the right thing to do here is to rename
Strings#splitStringByCommaToSet to Strings#tokenizeByCommaToSet so that
its name is aligned with another method that tokenizes by a delimiter
and trims whitespace. We also clean up the code here, removing an
unneeded split-by-delimiter-to-set method.
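The behavioural difference the renaming makes explicit, illustrated with hypothetical minimal re-implementations (not the Strings utility itself):

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.stream.Collectors;

final class SplitVsTokenize {
    static String[] splitStringByCommaToArray(String s) {
        return s.split(",", -1); // whitespace is preserved
    }

    static Set<String> tokenizeByCommaToSet(String s) {
        return Arrays.stream(s.split(","))
            .map(String::trim) // whitespace is trimmed
            .filter(t -> t.isEmpty() == false)
            .collect(Collectors.toCollection(LinkedHashSet::new));
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(splitStringByCommaToArray("a, b ,c"))); // [a,  b , c]
        System.out.println(tokenizeByCommaToSet("a, b ,c"));                       // [a, b, c]
    }
}
```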
Relates #27715
We currently do not have any server-side read timeouts implemented in
elasticsearch. This commit adds a read timeout setting that defaults to
30 seconds. If after 30 seconds a read has not occurred, the channel
will be closed. A timeout value of 0 disables the timeout.
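A hypothetical sketch of the mechanism, not the actual transport code: record the time of each read and let a scheduled task close the channel when the timeout elapses without one.

```java
import java.io.IOException;
import java.nio.channels.SocketChannel;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

final class ReadTimeoutWatcher {
    private final long timeoutMillis;
    private volatile long lastReadMillis = System.currentTimeMillis();

    ReadTimeoutWatcher(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    void onRead() {
        lastReadMillis = System.currentTimeMillis();
    }

    void schedule(ScheduledExecutorService scheduler, SocketChannel channel) {
        if (timeoutMillis == 0) {
            return; // a value of 0 disables the timeout
        }
        scheduler.scheduleAtFixedRate(() -> {
            if (System.currentTimeMillis() - lastReadMillis > timeoutMillis) {
                try {
                    channel.close(); // no read within the timeout
                } catch (IOException e) {
                    // best effort
                }
            }
        }, timeoutMillis, timeoutMillis, TimeUnit.MILLISECONDS);
    }
}
```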
The problem here is that splitting was using a method that intentionally
trims whitespace (the method is really meant to be used for splitting
parameters where whitespace should be trimmed like list
settings). However, for routing values whitespace should not be trimmed
because we allow routing with leading and trailing spaces. This commit
switches the parsing of these routing values to a method that does not
trim whitespace.
Relates #27712
It's possible that a merge may be ongoing when we check the breaker and segment
stats' memory usage, which causes the test to fail. Instead, we should wait for
merging to complete.
Resolves #27651
This test periodically fails if the nodes that apply the cluster state fail to ack the change within 100ms. This commit changes the checks in the test so that
it still verifies that the open command has taken effect, but asserts that the wait for active shards has actually failed.
This is related to #27563. In order to interface with java nio, we must
have buffers that are compatible with ByteBuffer. This commit introduces
a basic ByteBufferReference to easily allow transferring bytes off the
wire to usage in the application.
Additionally it introduces an InboundChannelBuffer. This is a buffer
that can internally expand as more space is needed. It is designed to
be integrated with a page recycler so that it can internally reuse pages.
The final piece is moving all of the index work for writing bytes to a
channel into the WriteOperation.
This commit adds a new dynamic cluster setting named `search.max_buckets` that can be used to limit the number of buckets created per shard or by the reduce phase. Each multi bucket aggregator can consume buckets during the final build of the aggregation at the shard level or during the reduce phase (final or not) in the coordinating node. When an aggregator consumes a bucket, a global count for the request is incremented and if this number is greater than the limit an exception is thrown (TooManyBuckets exception).
This change adds the ability for multi-bucket aggregators to "consume" buckets against the global limit; the default is 10,000. It's an opt-in consumer, so each multi-bucket aggregator must explicitly call the consumer when a bucket is added to the response.
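A hypothetical sketch of such a consumer, with an illustrative exception message:

```java
import java.util.concurrent.atomic.LongAdder;

final class MultiBucketConsumerSketch {
    static final class TooManyBucketsException extends RuntimeException {
        TooManyBucketsException(int limit) {
            super("too many buckets, must be less than or equal to [" + limit + "]");
        }
    }

    private final int limit; // search.max_buckets, default 10,000
    private final LongAdder count = new LongAdder();

    MultiBucketConsumerSketch(int limit) {
        this.limit = limit;
    }

    // called by each multi-bucket aggregator as it adds buckets to the response
    void accept(int newBuckets) {
        count.add(newBuckets);
        if (count.sum() > limit) {
            throw new TooManyBucketsException(limit);
        }
    }
}
```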
Closes #27452 #26012
Index settings didn't support resetting by wildcard, which also causes
issues like #27537 where archived settings can't be reset. This change
adds support for wildcards like `archived.*` to be used to reset settings to their
defaults or remove them from an index.
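From the Java API, resetting would look roughly like this (a sketch; assumes a client is available and that a null value for a wildcard key resets all matching settings):

```java
import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.Settings;

final class ResetArchivedSettings {
    static void resetArchived(Client client, String index) {
        client.admin().indices().prepareUpdateSettings(index)
            .setSettings(Settings.builder().putNull("archived.*").build())
            .get();
    }
}
```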
Closes #27537
The mappings can be submitted wrapped in a type object or not. They need to be returned in the same way as they were submitted. When applying field filters, we need to make sure that the format is preserved. MappingMetaData#getSourceAsMap removes the root level if it's the type object, which would make us overwrite the original mappings with filtered mappings but without the original root object.
Closes #27678
This commit restricts settings added to the keystore to have a lowercase
ASCII name. The Java Keystore javadocs state that the case sensitivity of
key alias names is implementation dependent. This ensures that regardless of
case sensitivity in a JVM implementation, the keys will be stored as we
expect.
Today, we prevent the system from storing a broken index template in the
transport layer; however, we don't prevent this in XContent. A broken
index template can break the whole cluster state.
This commit attempts to prevent the system from constructing an index
template without proper index patterns.
Add support for filtering fields returned as part of mappings in the get index, get mappings, get field mappings, and field capabilities APIs.
Plugins can plug in their own function, which receives the index name as an argument and returns a predicate that controls whether each field is included in the returned output.
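The shape of such a filter, as a hypothetical example that hides fields under a `private.` prefix for one index:

```java
import java.util.function.Function;
import java.util.function.Predicate;

final class FieldFilterExample {
    // input: index name; output: predicate over field names
    static final Function<String, Predicate<String>> FIELD_FILTER =
        index -> index.equals("restricted-index")
            ? field -> field.startsWith("private.") == false
            : field -> true;
}
```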
This commit adds the node name to the names of thread pool executors so
that the node name is visible in rejected execution exception messages.
Relates #27663
The main constructor for rejected execution exception passed its executor
shutdown constructor parameter to the super constructor, where it would
be used as a formatting parameter. This is a mistake, so this commit
fixes this issue.
In the global checkpoint sync action, we fsync the translog. However,
the last synced global checkpoint might already be equal to the current
global checkpoint, in which case fsyncing the translog is unnecessary
as either the sync-needed guard in the translog will skip the sync,
or the translog needs an fsync for another reason that will be picked up
elsewhere (e.g., at the end of a bulk request).
Relates #27652
The hashCode contract states that equal objects must have equal hash
codes; however, unequal objects are not required to have unequal
hash codes.
This commit rewrites GeoPointParsingTests#testEqualsHashCodeContract
using the #checkEqualsAndHashCode helper.
Closes #27633
* Fix highlighting on a keyword field that defines a normalizer
The `plain` and sometimes the `unified` highlighters need to re-analyze the content to highlight a field.
This change makes sure that we don't ignore the normalizer defined on the keyword field for this analysis.
After write operations in some situations we fire a post-operation
global checkpoint sync. The global checkpoint sync unconditionally
fsyncs the translog and this can then look like an fsync
per-request. This violates the translog durability settings on the index
if this durability is set to async. This commit changes the global
checkpoint sync to observe the translog durability.
Relates #27641
Today we exclude internal refreshes from the refresh stats. Yet, it's very
confusing not to take these into account. This change includes internal refreshes
in the stats until we have dedicated stats for this.
This new snapshot mostly brings a change to TopFieldCollector which can now
early terminate collection when trackTotalHits is `false`.
As a follow-up, we should replace our usage of
`EarlyTerminatingSortingCollector` with this new option.
Today, we maintain two sets in a SeqNoSet: ongoing sets and completed
sets. We can remove the completed sets and use only the ongoing sets by
releasing the internal bitset of a CountedBitSet when all its bits are
set. This behaves like the two sets but is simpler. This commit also makes
CountedBitSet a drop-in replacement for BitSet.
Relates #27268
* Add accounting circuit breaker and track segment memory usage
This commit adds a new circuit breaker "accounting" that is used for tracking
the memory usage of non-request-tied memory users. It also adds tracking for the
amount of Lucene segment memory used by a shard as a user of the new circuit
breaker.
The Lucene segment memory is updated when the shard refreshes, and removed when
the shard relocates away from a node or is deleted. It should also be noted that
all tracking for segment memory uses `addWithoutBreaking` so as not to fail the
shard if a limit is reached.
The `accounting` breaker has a default limit of 100% and will contribute to the
parent breaker limit.
Resolves #27044
Today we carry over the size of the live version map to ensure that
we minimize rehashing. Yet, once we are idle or we can issue a sync-commit
we can resize it to its defaults to free up memory.
Relates to #27516
Once a shard goes inactive we want the shard to be refreshed if
the refresh interval is the default, since we might hold on to unnecessary
segments, and in the inactive case we have stopped indexing and can release
old segments.
Relates to #27500
Add an index level setting `index.analyze.max_token_count` to control
the number of generated tokens in the _analyze endpoint.
Defaults to 10000.
Throw an error if the number of generated tokens exceeds this limit.
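A sketch of the guard (illustrative only, not the actual _analyze code): stop and fail as soon as the analyzer produces more tokens than the configured limit.

```java
import java.io.IOException;
import java.io.StringReader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;

final class AnalyzeLimit {
    static int countTokens(Analyzer analyzer, String text, int maxTokenCount) throws IOException {
        try (TokenStream ts = analyzer.tokenStream("field", new StringReader(text))) {
            ts.reset();
            int count = 0;
            while (ts.incrementToken()) {
                if (++count > maxTokenCount) {
                    throw new IllegalStateException(
                        "number of tokens exceeded the allowed maximum of [" + maxTokenCount + "]");
                }
            }
            ts.end();
            return count;
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(countTokens(new StandardAnalyzer(), "quick brown fox", 10000));
    }
}
```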
Closes #27038