#27409 deprecated the incorrectly-spelled `levenstein` in favour of `levenshtein`.
#27526 deprecated the inconsistent `jarowinkler` in favour of `jaro_winkler`.
These changes were merged into 6.2, and this change removes them entirely in 7.0.
* CustomFieldQuery: removed a redundant type check that was
already done higher up in the same if/else chain.
* PrioritizedEsThreadPoolExecutor: removed a check that was
simply a duplicate of an earlier one and would never have been true.
The current code contains an instanceof check and a comment that it should
eventually be changed to something else. The typeName() method returns a
unique name for the field type in question (geo_shape), so it can be used
instead.
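A minimal sketch of the suggested pattern; the `MappedFieldType` interface below is a stand-in for the real Elasticsearch type rather than the actual class:

```java
// Stand-in for the real MappedFieldType; only typeName() matters here.
interface MappedFieldType {
    String typeName(); // unique name per field type, e.g. "geo_shape"
}

final class FieldTypeCheck {
    static final String GEO_SHAPE = "geo_shape";

    // Before: an instanceof check tied to a concrete implementation class.
    // After: compare the unique type name instead.
    static boolean isGeoShape(MappedFieldType fieldType) {
        return GEO_SHAPE.equals(fieldType.typeName());
    }
}
```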
This commit addresses slowness in the test that checks that a rejected
execution contains the node name. The slowness came from setting the count on a
countdown latch too high (two in the case of the search thread pool)
where there would never be a second countdown on the latch. This means
that when the test node is shutting down, closing the node would have
to wait a full ten seconds before forcefully terminating the thread
pool. This commit fixes the issue so that the node can close
immediately, shaving ten seconds off the run time of the test.
Relates #27663
This commit fixes the test of an index with an unknown setting. The
problem here is that we were manipulating the index state on disk, but a
cluster state update could arrive between us manipulating the index
state on disk and us restarting the node, leading to the index state
that we just intentionally broke being fixed. As such, after restart,
the index state would not be in the state that we expected it to be in
and the test would fail. To address this, we hook into the restart and
break the index state immediately before the node is started again.
Relates #26995
This commit attempts to continue unifying the logic between different
transport implementations. As transports call a `TcpTransport` callback
when a new channel is accepted, there is no need for each implementation
to track accepted channels internally. Instead there is a single set of
accepted channels in `TcpTransport`, used for metrics and for shutting
down channels.
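A rough sketch of the unified bookkeeping, with illustrative names rather than the actual `TcpTransport` internals:

```java
import java.io.IOException;
import java.nio.channels.SocketChannel;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// One shared set of accepted channels instead of per-implementation tracking.
class AcceptedChannels {
    private final Set<SocketChannel> accepted = ConcurrentHashMap.newKeySet();

    // Invoked from the accept callback shared by all transport implementations.
    void onChannelAccepted(SocketChannel channel) {
        accepted.add(channel);
    }

    // Used for metrics.
    int openChannelCount() {
        return accepted.size();
    }

    // Used when shutting down channels.
    void closeAll() throws IOException {
        for (SocketChannel channel : accepted) {
            channel.close();
        }
        accepted.clear();
    }
}
```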
Today we are lenient and we open an index if it has broken
settings. This can happen if a user installs a plugin that registers an
index setting, creates an index with that setting, stops their node,
removes the plugin, and then restarts the node. In this case, the index
will have a setting that we do not recognize, yet we open the index
anyway. This leniency is dangerous, so this commit removes it. Note that
we are still lenient on upgrades and we should really reconsider this in
a follow-up.
Relates #26995
Setting a timeout here speeds the test up significantly since we do not
need to wait up to the default of 30 seconds for shards to start; we only
need an ACK that the index was opened.
This is related to #27563. This commit modifies the
InboundChannelBuffer to support releasable byte pages. These byte
pages are provided by the PageCacheRecycler. The PageCacheRecycler
must be passed to the Transport with this change.
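A simplified sketch of the idea (not the actual `InboundChannelBuffer` API); a plain allocation stands in for pages obtained from `PageCacheRecycler`:

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.function.Supplier;

// A buffer that grows page by page and can release consumed pages.
class PagedBufferSketch {
    static final int PAGE_SIZE = 1 << 14;

    // In the real change the supplier would hand out recycled pages.
    private final Supplier<ByteBuffer> pageSupplier = () -> ByteBuffer.allocate(PAGE_SIZE);
    private final ArrayDeque<ByteBuffer> pages = new ArrayDeque<>();

    void ensureCapacity(long bytes) {
        while ((long) pages.size() * PAGE_SIZE < bytes) {
            pages.addLast(pageSupplier.get());
        }
    }

    void releaseConsumedPage() {
        // Releasing would return the page to the recycler; here we just
        // drop the reference.
        pages.pollFirst();
    }
}
```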
We have some methods Strings#splitStringByCommaToArray and
Strings#splitStringByCommaToSet. It is not obvious that the former
leaves whitespace and the latter trims it. We also have
Strings#tokenizeToStringArray which tokenizes a string to an array, and
trims whitespace. It seems the right thing to do here is to rename
Strings#splitStringByCommaToSet to Strings#tokenizeByCommaToSet so that
its name is aligned with another method that tokenizes by a delimiter
and trims whitespace. We also clean up the code here, removing an
unneeded split-by-delimiter-to-set method.
Relates #27715
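To make the behavioral difference concrete, here is a small self-contained approximation of the two methods (not the actual `Strings` implementations):

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

class SplitVsTokenize {
    // Approximates splitStringByCommaToArray: whitespace is kept.
    static String[] splitByCommaToArray(String input) {
        return input.split(",", -1);
    }

    // Approximates tokenizeByCommaToSet: whitespace is trimmed.
    static Set<String> tokenizeByCommaToSet(String input) {
        Set<String> tokens = new LinkedHashSet<>();
        for (String token : input.split(",")) {
            String trimmed = token.trim();
            if (trimmed.isEmpty() == false) {
                tokens.add(trimmed);
            }
        }
        return tokens;
    }

    public static void main(String[] args) {
        String input = "a, b ,c";
        System.out.println(Arrays.toString(splitByCommaToArray(input))); // [a,  b , c]
        System.out.println(tokenizeByCommaToSet(input));                 // [a, b, c]
    }
}
```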
We currently do not have any server-side read timeouts implemented in
Elasticsearch. This commit adds a read timeout setting that defaults to
30 seconds. If after 30 seconds a read has not occurred, the channel
will be closed. A timeout value of 0 disables the timeout.
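A simplified sketch of how such a timeout can work; the scheduling details of the real transport differ, so treat the names here as illustrative:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Every completed read reschedules the timeout task; if the task ever
// fires, no read happened within the window and the channel is closed.
class ReadTimeoutSketch {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final long timeoutMillis; // 0 disables the timeout
    private volatile ScheduledFuture<?> timeoutTask;

    ReadTimeoutSketch(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    void onReadCompleted(Runnable closeChannel) {
        if (timeoutMillis == 0) {
            return; // timeout disabled
        }
        ScheduledFuture<?> previous = timeoutTask;
        if (previous != null) {
            previous.cancel(false); // a read happened in time
        }
        timeoutTask = scheduler.schedule(closeChannel, timeoutMillis, TimeUnit.MILLISECONDS);
    }
}
```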
The problem here is that splitting was using a method that intentionally
trims whitespace (the method is really meant for splitting parameters
where whitespace should be trimmed, like list settings). However,
whitespace should not be trimmed from routing values because we allow
routing with leading and trailing spaces. This commit switches the
parsing of these routing values to a method that does not trim
whitespace.
Relates #27712
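A tiny illustration of why the distinction matters; the splitter below does not trim, so a routing value like `" alpha "` survives verbatim:

```java
class RoutingSplitExample {
    public static void main(String[] args) {
        String routingParam = " alpha , beta ";
        // A non-trimming split preserves leading and trailing spaces,
        // which is what routing values require.
        for (String routing : routingParam.split(",", -1)) {
            System.out.println("[" + routing + "]"); // [ alpha ] and [ beta ]
        }
    }
}
```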
It's possible that a merge may be ongoing when we check the breaker and segment
stats' memory usage, which causes the test to fail. Instead, we should wait for
merging to complete.
Resolves #27651
This test periodically fails if the nodes that apply the cluster state fail to ack the change within 100ms. This commit changes the checks in the test so that
it still verifies that the open command has taken effect and that the wait for active shards has actually failed.
This is related to #27563. In order to interface with Java NIO, we must
have buffers that are compatible with ByteBuffer. This commit introduces
a basic ByteBufferReference to make it easy to transfer bytes read off
the wire into the application.
Additionally it introduces an InboundChannelBuffer. This is a buffer
that can internally expand as more space is needed. It is designed to
be integrated with a page recycler so that it can internally reuse pages.
The final piece is moving all of the index work for writing bytes to a
channel into the WriteOperation.
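A bare-bones take on the ByteBufferReference idea, assuming only that it exposes wire bytes through a `ByteBuffer`-compatible view without copying:

```java
import java.nio.ByteBuffer;

class ByteBufferReferenceSketch {
    private final ByteBuffer buffer;

    ByteBufferReferenceSketch(byte[] bytes, int offset, int length) {
        // wrap() shares the underlying array, so no copy is made;
        // slice() narrows the view to the requested region.
        this.buffer = ByteBuffer.wrap(bytes, offset, length).slice();
    }

    byte get(int index) {
        return buffer.get(index);
    }

    int length() {
        return buffer.remaining();
    }
}
```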
This commit adds a new dynamic cluster setting named `search.max_buckets` that can be used to limit the number of buckets created per shard or by the reduce phase. Each multi-bucket aggregator can consume buckets during the final build of the aggregation at the shard level or during the reduce phase (final or not) on the coordinating node. When an aggregator consumes a bucket, a global count for the request is incremented, and if this number is greater than the limit an exception is thrown (TooManyBuckets exception).
This change adds the ability for multi-bucket aggregators to "consume" buckets against the global limit, which defaults to 10,000. It is an opt-in consumer, so each multi-bucket aggregator must explicitly call the consumer when a bucket is added to the response.
Closes #27452, #26012
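A sketch of the opt-in consumer described above; the names are illustrative, not the actual service API:

```java
class BucketConsumerSketch {
    static class TooManyBucketsException extends RuntimeException {
        TooManyBucketsException(int limit) {
            super("Trying to create too many buckets, limit is [" + limit + "]");
        }
    }

    private final int limit; // e.g. the 10,000 default of search.max_buckets
    private int count;       // global count for the request

    BucketConsumerSketch(int limit) {
        this.limit = limit;
    }

    // Each multi-bucket aggregator explicitly calls this when it adds
    // buckets at the shard level or during the reduce phase.
    void accept(int newBuckets) {
        count += newBuckets;
        if (count > limit) {
            throw new TooManyBucketsException(limit);
        }
    }
}
```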
Index settings didn't support reset by wildcard, which also causes
issues like #27537 where archived settings can't be reset. This change
adds support for wildcards like `archived.*` to reset settings to their
defaults or remove them from an index.
Closes #27537
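The matching logic amounts to a simple prefix-style wildcard; a hypothetical sketch:

```java
import java.util.LinkedHashMap;
import java.util.Map;

class WildcardResetSketch {
    // Minimal wildcard matcher: "archived.*" matches any key with that prefix.
    static boolean simpleMatch(String pattern, String key) {
        return pattern.endsWith("*")
                ? key.startsWith(pattern.substring(0, pattern.length() - 1))
                : pattern.equals(key);
    }

    public static void main(String[] args) {
        Map<String, String> indexSettings = new LinkedHashMap<>();
        indexSettings.put("archived.index.old_setting", "foo");
        indexSettings.put("index.number_of_replicas", "1");

        // Resetting "archived.*" removes every archived setting.
        indexSettings.keySet().removeIf(key -> simpleMatch("archived.*", key));
        System.out.println(indexSettings); // {index.number_of_replicas=1}
    }
}
```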
The mappings can be submitted wrapped in a type object or not, and they need to be returned the same way they were submitted. When applying field filters, we need to make sure that this format is preserved. MappingMetaData#getSourceAsMap removes the root level if it's the type object, which would make us overwrite the original mappings with filtered mappings that lack the original root object.
Closes #27678
This commit restricts settings added to the keystore to have a lowercase
ASCII name. The Java KeyStore javadocs state that the case sensitivity of
key alias names is implementation dependent. This ensures that, regardless
of case sensitivity in a JVM implementation, the keys will be stored as we
expect.
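A hypothetical version of the restriction (the exact set of separator characters the real validation allows may differ):

```java
import java.util.regex.Pattern;

class KeystoreNameCheck {
    // Assumption: lowercase ASCII letters and digits, plus a few separators.
    private static final Pattern ALLOWED = Pattern.compile("[a-z0-9_\\-.]+");

    static void validate(String settingName) {
        if (ALLOWED.matcher(settingName).matches() == false) {
            throw new IllegalArgumentException(
                    "setting name [" + settingName + "] must be lowercase ascii");
        }
    }
}
```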
Today, we prevent the system from storing a broken index template in the
transport layer; however, we don't prevent this in XContent. A broken
index template can break the whole cluster state.
This commit attempts to prevent the system from constructing an index
template without proper index patterns.
Add support for filtering the fields returned as part of mappings in the get index, get mappings, get field mappings, and field capabilities APIs.
Plugins can plug in their own function, which receives the index name as an argument and returns a predicate that controls whether each field is included in the returned output.
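The shape of the extension point, sketched with illustrative names rather than the exact plugin interface:

```java
import java.util.function.Function;
import java.util.function.Predicate;

// Given an index name, produce a predicate deciding which fields the
// mapping-related APIs may return.
interface FieldFilterExtension {
    Function<String, Predicate<String>> getFieldFilter();
}

// Example: hide any field whose name starts with "internal." on every index.
class HideInternalFields implements FieldFilterExtension {
    @Override
    public Function<String, Predicate<String>> getFieldFilter() {
        return index -> field -> field.startsWith("internal.") == false;
    }
}
```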
This commit adds the node name to the names of thread pool executors so
that the node name is visible in rejected execution exception messages.
Relates #27663
The main constructor for rejected execution exception passed its executor
shutdown constructor parameter to the super constructor, where it would
be used as a formatting parameter. This was a mistake, so this commit
fixes the issue.
In the global checkpoint sync action, we fsync the translog. However,
the last synced global checkpoint might already be equal to the current
global checkpoint, in which case fsyncing the translog is unnecessary:
either the sync-needed guard in the translog will skip the sync, or the
translog needs an fsync for another reason that will be picked up
elsewhere (e.g., at the end of a bulk request).
Relates #27652
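The guard in a nutshell, with illustrative names (and without the real code's concurrency handling):

```java
class CheckpointSyncSketch {
    private volatile long lastSyncedGlobalCheckpoint = -1;

    void maybeSyncTranslog(long currentGlobalCheckpoint, Runnable fsyncTranslog) {
        // Only fsync when the checkpoint has actually advanced.
        if (currentGlobalCheckpoint > lastSyncedGlobalCheckpoint) {
            fsyncTranslog.run();
            lastSyncedGlobalCheckpoint = currentGlobalCheckpoint;
        }
        // Otherwise: already synced, or another path (e.g. the end of a
        // bulk request) will pick up the fsync.
    }
}
```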
The hashCode contract states that equal objects must have equal hash
codes; however, unequal objects are not required to have unequal hash
codes.
This commit rewrites GeoPointParsingTests#testEqualsHashCodeContract
using the #checkEqualsAndHashCode helper.
Closes #27633
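The contract in code, using a generic point class rather than GeoPoint itself:

```java
import java.util.Objects;

class Point {
    final double lat, lon;

    Point(double lat, double lon) {
        this.lat = lat;
        this.lon = lon;
    }

    @Override
    public boolean equals(Object other) {
        if ((other instanceof Point) == false) {
            return false;
        }
        Point p = (Point) other;
        return lat == p.lat && lon == p.lon;
    }

    @Override
    public int hashCode() {
        // Equal fields imply an equal hash code; collisions between
        // unequal points are allowed by the contract.
        return Objects.hash(lat, lon);
    }
}
```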
* Fix highlighting on a keyword field that defines a normalizer
The `plain` and sometimes the `unified` highlighters need to re-analyze the content to highlight a field.
This change makes sure that we don't ignore the normalizer defined on the keyword field for this analysis.
After write operations, in some situations, we fire a post-operation
global checkpoint sync. The global checkpoint sync unconditionally
fsyncs the translog, and this can then look like an fsync
per request. This violates the translog durability settings on the index
if durability is set to async. This commit changes the global
checkpoint sync to observe the translog durability.
Relates #27641
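A sketch of the behavioral change; the enum and method names are illustrative:

```java
class DurabilityAwareSyncSketch {
    enum Durability { REQUEST, ASYNC }

    void afterWriteOperation(Durability durability, Runnable fsyncTranslog) {
        if (durability == Durability.REQUEST) {
            fsyncTranslog.run(); // request durability asked for a per-request fsync
        }
        // With async durability, the periodic translog sync handles it
        // instead, so the post-operation sync no longer fsyncs.
    }
}
```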
Today we exclude internal refreshes from the refresh stats. Yet, it is
confusing not to take these into account. This change includes internal
refreshes in the stats until we have dedicated stats for them.