Today we exclude internal refreshes from the refresh stats. Yet it is
confusing not to take these into account. This change includes internal
refreshes in the stats until we have dedicated stats for them.
This new snapshot mostly brings a change to TopFieldCollector, which can now
early-terminate collection when trackTotalHits is `false`.
As a follow-up, we should replace our usage of
`EarlyTerminatingSortingCollector` with this new option.
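A minimal sketch of how the new option might be used (assuming the 7.2-snapshot overload of `TopFieldCollector.create` whose final boolean is `trackTotalHits`; the class and flag values here are illustrative):

```java
import java.io.IOException;

import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.TopFieldCollector;

class EarlyTerminationSketch {
    // With trackTotalHits=false the collector may stop collecting once it has
    // the requested number of hits in sort order; the reported total hit
    // count is then only a lower bound.
    static void collectTop10(IndexSearcher searcher, Sort indexSort) throws IOException {
        TopFieldCollector collector = TopFieldCollector.create(
            indexSort, 10, /*fillFields=*/true,
            /*trackDocScores=*/false, /*trackMaxScore=*/false, /*trackTotalHits=*/false);
        searcher.search(new MatchAllDocsQuery(), collector);
        System.out.println("collected at least " + collector.getTotalHits() + " hits");
    }
}
```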
Today, we maintain two sets in a SeqNoSet: ongoing sets and completed
sets. We can remove the completed sets and use only the ongoing sets by
releasing the internal bitset of a CountedBitSet once all of its bits are
set. This behaves like the two-set approach but is simpler. This commit
also makes CountedBitSet a drop-in replacement for BitSet.
Relates #27268
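A minimal sketch of the release-when-full idea (illustrative, not the actual CountedBitSet code):

```java
import java.util.BitSet;

// Illustrative: a fixed-size bitset that counts its set bits and releases
// the backing storage once every bit is set, after which all queries
// trivially answer true.
final class CountedBitSetSketch {
    private final int numBits;
    private int count;       // number of bits currently set
    private BitSet bits;     // null once all bits are set

    CountedBitSetSketch(int numBits) {
        this.numBits = numBits;
        this.bits = new BitSet(numBits);
    }

    void set(int index) {
        if (bits != null && bits.get(index) == false) {
            bits.set(index);
            if (++count == numBits) {
                bits = null; // release the internal bitset
            }
        }
    }

    boolean get(int index) {
        return bits == null || bits.get(index);
    }
}
```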
* Add accounting circuit breaker and track segment memory usage
This commit adds a new circuit breaker "accounting" that is used for tracking
the memory usage of non-request-tied memory users. It also adds tracking for the
amount of Lucene segment memory used by a shard as a user of the new circuit
breaker.
The Lucene segment memory is updated when the shard refreshes, and removed when
the shard relocates away from a node or is deleted. It should also be noted that
all tracking for segment memory uses `addWithoutBreaking` so as not to fail the
shard if a limit is reached.
The `accounting` breaker has a default limit of 100% and will contribute to the
parent breaker limit.
Resolves #27044
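A hedged sketch of the tracking pattern (`addWithoutBreaking` is the CircuitBreaker method mentioned above; the surrounding shard hooks are illustrative):

```java
import org.elasticsearch.common.breaker.CircuitBreaker;

// Illustrative: adjust the "accounting" breaker by the delta in segment
// memory on refresh; addWithoutBreaking never trips the breaker, so a full
// breaker cannot fail the shard, it only feeds into the parent's total.
class SegmentMemoryAccountingSketch {
    static void onRefresh(CircuitBreaker accounting, long oldSegmentBytes, long newSegmentBytes) {
        accounting.addWithoutBreaking(newSegmentBytes - oldSegmentBytes);
    }

    static void onShardClosedOrRelocated(CircuitBreaker accounting, long trackedSegmentBytes) {
        accounting.addWithoutBreaking(-trackedSegmentBytes); // give the bytes back
    }
}
```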
Today we carry over the size of the live version map to ensure that
we minimize rehashing. Yet, once we are idle or can issue a sync-commit,
we can resize it to its defaults to free up memory.
Relates to #27516
Once a shard goes inactive we want the shard to be refreshed if the
refresh interval is the default, since we might be holding on to
unnecessary segments; in the inactive case indexing has stopped, so old
segments can be released.
Relates to #27500
Add an index-level setting `index.analyze.max_token_count` to control
the number of tokens generated by the _analyze endpoint.
It defaults to 10000.
An error is thrown if the number of generated tokens exceeds this limit.
Closes #27038
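A minimal sketch of the enforcement loop (illustrative; the actual check lives in the _analyze transport action):

```java
import java.io.IOException;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;

class AnalyzeTokenLimitSketch {
    // Illustrative: count tokens while consuming the stream and fail fast
    // once the configured limit is exceeded.
    static int analyzeWithLimit(Analyzer analyzer, String field, String text,
                                int maxTokenCount) throws IOException {
        int tokenCount = 0;
        try (TokenStream stream = analyzer.tokenStream(field, text)) {
            stream.reset();
            while (stream.incrementToken()) {
                if (++tokenCount > maxTokenCount) {
                    throw new IllegalStateException("The number of tokens produced by "
                        + "_analyze exceeds the allowed maximum of [" + maxTokenCount + "]");
                }
            }
            stream.end();
        }
        return tokenCount;
    }
}
```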
The ChecksumBlobStoreFormat.writeAtomic() method writes a blob using a
temporary name and then moves the blob to its final name. The move
operation can fail and in this case the temporary blob is deleted. If
this delete operation also fails, then the initial exception is lost.
This commit ensures that when something goes wrong during the move
operation the initial exception is kept and thrown, and if the delete
operation also fails then this additional exception is added
as a suppressed exception to the initial one.
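The exception-handling pattern, sketched with illustrative helper names (`moveBlob`/`deleteBlob` are stand-ins for the blob container operations):

```java
import java.io.IOException;

class WriteAtomicSketch {
    // Illustrative: keep the primary failure and attach any cleanup failure
    // as a suppressed exception rather than losing the original cause.
    void writeAtomic(String tempBlobName, String finalBlobName) throws IOException {
        try {
            moveBlob(tempBlobName, finalBlobName);
        } catch (IOException e) {
            try {
                deleteBlob(tempBlobName);
            } catch (IOException cleanupFailure) {
                e.addSuppressed(cleanupFailure); // don't mask the move failure
            }
            throw e; // the initial exception is kept
        }
    }

    // hypothetical stand-ins for the blob container operations
    void moveBlob(String source, String target) throws IOException { /* ... */ }
    void deleteBlob(String name) throws IOException { /* ... */ }
}
```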
Today when configuring the data paths for the environment, we set the data
paths to either the specified path.data or default to the data directory
relative to the Elasticsearch home. Yet if node.local_storage is false,
data paths do not even make sense. In this case, we should reject the
settings if path.data is set, and instead of defaulting data paths to
data relative to home, we should leave the data paths empty. This commit
does this.
Relates #27587
Today a refresh listener won't preserve the entire context, i.e. it won't
carry over response headers etc. from the caller side. This change adds
support for stored contexts.
Today we only expose the external reader's segments. Yet, from a statistics
perspective both internal and external segments are relevant. This commit
exposes the additional segments of the internal and external readers respectively.
A compressible bytes output stream is a stream output which supports a
reset method. However, compressible bytes output streams are unusual in
that the current implementation sometimes supports a reset (if the
stream is not compressed) and sometimes does not (if the stream is
compressed). This inconsistent behavior is puzzling; instead we should
simply always throw an unsupported operation exception.
Relates #27564
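The resulting contract, sketched (illustrative class; only the `reset` behavior matters here):

```java
import java.io.OutputStream;

// Illustrative: reset is unconditionally unsupported, whether or not the
// underlying stream compresses, so callers see one consistent contract.
class CompressibleBytesOutputStreamSketch extends OutputStream {
    @Override
    public void write(int b) { /* buffer the byte, possibly compressed */ }

    public void reset() {
        throw new UnsupportedOperationException("reset is not supported");
    }
}
```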
The GlobalOrdinalsStringTermsAggregator.LowCardinality aggregator casts global
values to `GlobalOrdinalMapping`, even though the implementation of global
values is different when a `missing` value is configured.
This commit adds a new API that gives access to the ordinal remapping in order
to fix this problem.
The main highlight of this new snapshot is that it introduces the opportunity
for queries to opt out of caching. In case a query opts out of caching, not only
will it never be cached, but also no compound query that wraps it will be
cached.
Also include _type and _id for parent/child hits inside inner hits.
In the case of the top_hits aggregation, nested search hits are
returned directly and are not grouped by a root or parent document, so
it is important to include the _id and _index attributes in order to know
to which documents these nested search hits belong.
Closes #27053
Method TruncateTranslogIT#corruptTranslogFiles corrupts some random
existing *.tlog files in a translog directory. However, this may not
actually corrupt the translog at all if it corrupts only tlog files which
are not referenced by the Checkpoint (e.g. their translog generations are
smaller than the Checkpoint's).
This commit makes sure that we corrupt some tlog files which are
referenced by the Checkpoint.
Closes #27538
Running with java.security.AllPermission granted is
equivalent to disabling the security manager. This commit adds a
bootstrap check that forbids running with this permission granted.
Relates #27548
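A hedged sketch of what such a check can look like (illustrative; the real check is one of the bootstrap check implementations):

```java
import java.security.AccessController;
import java.security.AllPermission;

class AllPermissionCheckSketch {
    // Illustrative: if asking for AllPermission succeeds, the security
    // manager is effectively disabled and startup should be refused.
    static boolean isAllPermissionGranted() {
        try {
            AccessController.checkPermission(new AllPermission());
            return true;  // nothing is denied -- equivalent to no security manager
        } catch (SecurityException e) {
            return false; // the expected case under a sane policy
        }
    }
}
```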
Compressible bytes output stream swallows exceptions that occur when
closing. This commit changes this behavior so that such exceptions
bubble up.
Relates #27542
Today we refresh automatically in the background, by default every second.
This default behavior has a significant impact on indexing performance
if the refreshes are not needed.
This change introduces a notion of a shard being `search idle`, which a
shard transitions to after `30s` (by default) without any access to an
external searcher. Once a shard is search idle, all scheduled refreshes
will be skipped unless there are refresh listeners registered.
If a search happens on a `search idle` shard, the search request _parks_
on a refresh listener and will be executed once the next scheduled refresh
occurs. This also immediately turns the shard back into the `non-idle` state.
This behavior is only applied if there is no explicit refresh interval set.
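A minimal sketch of the search-idle gate (field and method names are made up; the real logic lives in the shard):

```java
import java.util.concurrent.TimeUnit;

class SearchIdleSketch {
    private final long searchIdleNanos = TimeUnit.SECONDS.toNanos(30); // default 30s
    private volatile long lastSearcherAccessNanos = System.nanoTime();

    boolean isSearchIdle() {
        return System.nanoTime() - lastSearcherAccessNanos >= searchIdleNanos;
    }

    void scheduledRefresh(boolean hasRefreshListeners) {
        if (isSearchIdle() && hasRefreshListeners == false) {
            return; // skip the periodic refresh while nobody is searching
        }
        refresh();
    }

    void onSearcherAccess() {
        lastSearcherAccessNanos = System.nanoTime(); // shard becomes non-idle again
    }

    void refresh() { /* ... */ }
}
```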
Currently, translog operations are read and processed one by one. This
may be a problem as stale operations in translogs may suddenly reappear
in recoveries. To make sure that stale operations won't be processed, we
read the translog files in reverse order (i.e. from the most recent
file to the oldest) and only process an operation if its sequence
number has not been seen before.
Relates to #10708
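A minimal sketch of the reverse-read idea (illustrative types; the real code reads translog generations from disk, not in-memory lists):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class ReverseTranslogReplaySketch {
    interface Operation { long seqNo(); }

    // Illustrative: walk generations newest-to-oldest and skip any operation
    // whose sequence number has already been seen, so stale copies in older
    // generations are never processed.
    void replay(List<List<Operation>> generationsNewestFirst) {
        Set<Long> seenSeqNos = new HashSet<>();
        for (List<Operation> generation : generationsNewestFirst) {
            for (Operation op : generation) {
                if (seenSeqNos.add(op.seqNo())) {
                    process(op); // the newest copy wins
                }
            }
        }
    }

    void process(Operation op) { /* ... */ }
}
```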
Any CLI commands that depend on core Elasticsearch might touch classes
(directly or indirectly) that depend on logging. If they do this and
logging is not configured, Log4j will dump status error messages to the
console. As such, we need to ensure that any such CLI command configures
logging (with a trivial configuration that dumps log messages to the
console). Previously we did this in the base CLI command but with the
refactoring of this class out of core Elasticsearch, we no longer
configure logging there (since we did not want this class to depend on
settings and logging). However, this meant for some CLI commands (like
the plugin CLI) we were no longer configuring logging. This commit adds
base classes between the low-level command and multi-command classes
that ensure that logging is configured. Any CLI command that depends on
core Elasticsearch should use this infrastructure to ensure logging is
configured. There is one exception to this: Elasticsearch itself, because
it takes responsibility into its own hands for configuring logging from
Elasticsearch settings and log4j2.properties. We preserve this special
status.
Relates #27523
Today if refresh is disabled the doc stats are no longer updated.
In a bulk index scenario this might cause confusion, since even if
we refresh internal readers etc., the doc stats never advance.
This change cuts over to the internal reader, which is refreshed outside
of the external reader's refresh interval but is always equally `fresh` or
`fresher`; this will cause less confusion.
In a previous change, we locked down the classes that can exit by
specifying explicit classes rather than packages that can exit. Alas,
there was a bug in the sense that the class that we exit from in the
case of an uncaught exception is not
ElasticsearchUncaughtExceptionHandler but rather an anonymous nested
class of ElasticsearchUncaughtExceptionHandler. To address this, we
replace this anonymous class with a bona fide nested class
ElasticsearchUncaughtExceptionHandler$PrivilegedHaltAction. Note that if
we try to get this class name, we have a $ in the middle of the string,
which is a special regular-expression character; as such, we have to
escape it.
Relates #27518
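For illustration, escaping the `$` before using the class name in a regular expression:

```java
import java.util.regex.Pattern;

class DollarEscapeExample {
    // "$" is a regex metacharacter, so the nested class name must be escaped
    // (or quoted) before it can be used as a pattern.
    static final String NAME = "ElasticsearchUncaughtExceptionHandler$PrivilegedHaltAction";
    static final Pattern PATTERN = Pattern.compile(NAME.replace("$", "\\$"));
    // equivalently: Pattern.compile(Pattern.quote(NAME));
}
```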
The reverted commit looks harmless; unfortunately, it can break the engine
flush scheduler and translog rolling. Both `uncommittedOperations` and
`uncommittedSizeInBytes` are currently calculated based on the minimum
required generation for recovery rather than the translog generation of
the last index commit. This is not correct if other index commits are
reserved for snapshotting, even though we are keeping only the last index
commit.
This reverts commit e95d18ec23.
Today we create a new concurrent hash map every time we refresh
the internal reader. Under the defaults this isn't much of a problem, but
once the refresh interval is set to `-1` these maps grow quite large
and this can have a significant impact on indexing throughput. Under
low-memory situations this can cause up to a 2x slowdown. This change
carries over the map size as the initial capacity, which will be
auto-adjusted once indexing stops.
Closes #20498
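A minimal sketch of the capacity carry-over (illustrative; the default floor is an assumption):

```java
import java.util.concurrent.ConcurrentHashMap;

class VersionMapSizingSketch {
    // Illustrative: seed the replacement map with the previous map's size so
    // a busy shard doesn't rehash its way back up after every refresh.
    static <K, V> ConcurrentHashMap<K, V> newMapAfterRefresh(ConcurrentHashMap<K, V> previous) {
        int defaultCapacity = 16; // assumption: a small default floor
        return new ConcurrentHashMap<>(Math.max(defaultCapacity, previous.size()));
    }
}
```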
During a scroll, if the search sort matches the index sort we use the sort values of the last doc returned by
the previous scroll to optimize the main query with a `SearchAfterSortedDocQuery`.
This query can "jump" directly to the first document that sorts after the provided sort values.
This optimization is also applied if the search sort is a prefix of the index
sort, but that case throws an exception because we use the index sort
(instead of the search sort) to validate the sort values of the last document.
This change fixes this bug and adds a test for it.
Pull request #20220 added a change where the store files
that have the same name but are different from the ones in the
snapshot are deleted first before the snapshot is restored.
This logic was based on the `Store.RecoveryDiff.different`
set of files which works by computing a diff between an
existing store and a snapshot.
This works well when the files on the filesystem form a valid
shard store, i.e. there's a `segments` file and the store files
are not corrupted. Otherwise, the existing store's snapshot
metadata cannot be read (using Store#snapshotStoreMetadata())
and an exception is thrown
(CorruptIndexException, IndexFormatTooOldException etc.) which
is later caught at the beginning of the restore process
(see RestoreContext#restore()) and is translated into
an empty store metadata (Store.MetadataSnapshot.EMPTY).
This makes the deletion of different files introduced
in #20220 useless, as the set of files will always be empty
even when store files exist on the filesystem. And if some
files are present within the store directory, then restoring
a snapshot with files of the same names will fail with a
FileAlreadyExistsException.
This is part of the #26865 issue.
There are various cases where some files could exist in the
store directory before a snapshot is restored. One that
Igor identified is a restore attempt that failed on a node,
with only the first files restored; the shard is then allocated
again to the same node and the restore starts again (but fails
because of the existing files). Another one is when some files
of a closed index are corrupted / deleted and the index is
restored.
This commit adds a test that uses the infrastructure provided
by IndexShardTestCase in order to test that restoring a shard
succeeds even when files with the same names exist on the filesystem.
Related to #26865
The `delimited_payload_filter` is renamed to `delimited_payload`; the old
name is deprecated and should be replaced by `delimited_payload`.
Closes #21978
Today, we keep only the last index commit and use only it to calculate
the minimum required translog generation. This may no longer be correct
as we introduced a new deletion policy which keeps multiple index
commits. This change adjusts the CombinedDeletionPolicy so that it can
work correctly with a new index deletion policy.
Relates to #10708, #27367
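A hedged sketch of the adjusted computation (assuming the translog generation is stored in each commit's user data under a key like `"translog_generation"`; this key name is an assumption):

```java
import java.io.IOException;
import java.util.List;

import org.apache.lucene.index.IndexCommit;

class MinTranslogGenerationSketch {
    // Illustrative: with multiple retained commits, the minimum required
    // translog generation must come from the oldest retained commit rather
    // than from the last commit only.
    static long minRequiredTranslogGeneration(List<IndexCommit> retainedCommits) throws IOException {
        long min = Long.MAX_VALUE;
        for (IndexCommit commit : retainedCommits) {
            // assumption: the key under which the generation is stored
            min = Math.min(min, Long.parseLong(commit.getUserData().get("translog_generation")));
        }
        return min;
    }
}
```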