This commit adds support for incremental top N reduction if the number of
expected shards in the search request is high enough. The changes here
also clean up more code in SearchPhaseController to make the separation
clearer between values that are the same on each search result and values
that are per response. The reduced search phase result no longer holds an
arbitrary result just to obtain values like `from`, `size`, or sort values;
these are now cleanly encapsulated.
The refactoring in #23711 hardcoded the version logic on replicas to assume monotonic versions. Sadly, that's wrong for `FORCE` and `VERSION_GTE`. Instead, we should use the methods in `VersionType` to detect conflicts.
Note - once replicas use sequence numbers for out-of-order delivery, this logic goes away.
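For illustration, a hedged sketch of the intended check, using the conflict-detection methods that `VersionType` exposes; the surrounding variables (`op`, `currentVersion`, `expectedVersion`, `deleted`, `shardId`) are hypothetical placeholders, not the exact engine code:
```java
// Sketch only: ask the operation's VersionType whether this is a conflict,
// instead of assuming versions grow monotonically on the replica.
if (op.versionType().isVersionConflictForWrites(currentVersion, expectedVersion, deleted)) {
    throw new VersionConflictEngineException(shardId, op.type(), op.id(),
            op.versionType().explainConflictForWrites(currentVersion, expectedVersion, deleted));
}
```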
This commit upgrades the Log4j dependencies from version 2.7 to version
2.8.2. This release includes a fix for a case where Log4j could lose
exceptions in the presence of a security manager.
Relates #23995
The ExtrasFS filesystem creates extra directories when creating temp
directories during tests to ensure that Lucene does not care about extra
files. These extra files get in our way in the plugins service tests
because some of these tests are counting only on certain directories
existing. This commit suppresses the ExtrasFS filesystem for the plugins
service tests, and fixes a test that was passing for the wrong reason
(because of the existence of an extra directory from ExtrasFS).
This commit removes some leniency from the plugin service which skips
hidden files in the plugins directory. We really want to ensure the
integrity of the plugin folder, so hasta la vista leniency.
Relates #23982
This commit removes the "legacy" feature of secure settings, which setup
a parallel setting that was a fallback in the insecure
elasticsearch.yml. This was previously used to allow the new secure
setting name to be that of the old setting name, but is now not in use
due to other refactorings. It is much cleaner to just have all secure
settings use new setting names. If in the future we want to reuse the
previous setting name, once support for the insecure settings have been
removed, we can then rename the secure setting. This also adds a test
for the behavior.
This test was sporadically failing for the following reason:
- 4 nodes (nodes 0, 1, 2, and 3) running with `minimum_master_nodes` set to 3
- we stop 2 nodes (node 0 and 3)
- wait for cluster block to be in place on all nodes
- start 2 nodes (node 4 and node 5) and do a `prepareHealth().setWaitForNodes("4")`
- then do a search request
The search request runs into the `ClusterBlockException` as the `prepareHealth().setWaitForNodes("4")` check succeeds on a cluster state that has
nodes 1, 2, 3, and 4, i.e., only one of the two new nodes has joined the cluster and only one of the two dead nodes was removed by the master
(removing the dead nodes only happens after there are again `minimum_master_nodes` nodes in the cluster).
This commit fixes the issue by reusing a method from InternalTestCluster that checks that the right nodes have rejoined the cluster.
The test assumes that two nodes leaving the cluster results in two cluster state updates on the master, which is invalidated by cluster state
batching.
Shuffling xContent breaks the order of the highlighter fields in the
internal list if the highlighter doesn't use the array syntax. In other tests we
avoid shuffling this json level, but since this is done in the base test for
aggregations we should ensure the highlight builder uses the array syntax here.
The `getProperty` method is an internal method needed to run pipeline aggregations and retrieve info by path from the aggs tree. It is not needed in the `Aggregation` interface, which is returned to users running aggregations from the transport client. The method is moved to the InternalAggregation class as that's where it belongs.
ESTestCase has methods to shuffle xContent keys given a builder or a parser. Shuffling wasn't actually doing what was expected but rather reordering the keys in their natural ordering, hence the output was always the same at every run. Corrected that and added tests, also fixed a couple of tests that were affected by this fix.
If a snapshot is taken on multiple indices, and some of them are "good"
indices that don't contain any corruption or failures, and some of them
are "bad" indices that contain missing shards or corrupted shards, and
if the snapshot request is set to partial=false (meaning don't take a
snapshot if there are any failures), then the good indices will not be
snapshotted either. Previously, when getting the status of such a
snapshot, a 500 error would be thrown, because the snap-*.dat blob for
the shards in the good index could not be found.
This commit fixes the problem by reporting shards of good indices as
failed due to a failed snapshot, instead of throwing the
NoSuchFileException.
Closes #23716
This change disables graph analysis of token streams containing a shingle or a CJK filter that produces shingles or ngrams of different sizes. The graph analysis is disabled for phrase and boolean queries.
Closes #23918
This commit modifies the BulkProcessor to be decoupled from the
client implementation. Instead it just takes a
BiConsumer<BulkRequest, ActionListener<BulkResponse>> that executes
the BulkRequest.
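For illustration, a minimal sketch of what construction looks like after this change (the `client` variable and the no-op listener are assumptions for brevity, not the exact call sites):
```java
// The processor now only needs a BiConsumer that knows how to execute a
// BulkRequest, rather than a concrete client implementation.
BulkProcessor processor = BulkProcessor.builder(
        (request, listener) -> client.bulk(request, listener),
        new BulkProcessor.Listener() {
            @Override
            public void beforeBulk(long executionId, BulkRequest request) {}
            @Override
            public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {}
            @Override
            public void afterBulk(long executionId, BulkRequest request, Throwable failure) {}
        })
        .build();
```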
When executing an index operation on the primary shard,
`TransportShardBulkAction` first parses the document, sees if there are any
mapping updates that need to be applied, and then updates the mapping on the
master node. It then re-parses the document to make sure that the mappings have
been applied and propagated.
This adds a check that skips the second parsing of the document in the event
no mapping update was applied in the first pass.
Fixes a performance regression introduced in #23665
Today we have several code paths to merge top docs based on the number of
search results returned from the shards. If there is only a single shard
holding any hits we go down a different code path with quite some complexity, while
if there are more than one the code is basically duplicated to save the
creation of a dense array of top docs, which can be large if there are many results.
This commit removes the need for the dense array and, in turn, the justification for
the optimization. This commit introduces a single code path to merge top docs.
The InternalEngine Index/Delete methods (plus satellites like version loading from Lucene) have accumulated some cruft over the years, making it hard to clearly follow the code flows for the various use cases (primary indexing/recovery/replicas etc.). This PR refactors those methods for better readability. The methods are broken up into smaller sub-methods, albeit at the price of less code reuse.
To support the refactoring I have considerably beefed up the versioning tests.
This PR is a spin-off from #23543, which made it clear this is needed.
The purpose of this validation is to make sure that the master doesn't step down
due to a change in master nodes, which also means that there is no way to revert
an accidental change. Since we validate using the current cluster state (and
not the one the settings come from), we have to be careful and only
validate if the local node is already a master. Doing so all the time causes
subtle issues. For example, a node that joins a cluster has no nodes in its
current cluster state. When it receives a cluster state from the master with
a dynamic minimum master nodes setting in it, we must make sure we don't reject it.
Closes #23695
SingleNodeDiscoveryIT uses a hardcoded port for the purpose of binding
two nodes within the limited port range that an unconfigured unicast zen
ping hosts list would try to discover another node on. This commit at
least removes this hardcoding for the first node to come up, although it
still tries to bind the second node to the limited port range after the
first node has bound.
This commit makes closing a ReleasableBytesStreamOutput release the underlying BigArray so
that we can use try-with-resources with these streams and avoid leaking memory by not returning
the BigArray. As part of this change, the ReleasableBytesStreamOutput adds protection to only release the BigArray once.
In order to make some of the changes cleaner, the ReleasableBytesStream interface has been
removed. The BytesStream interface is changed to an abstract class so that we can use it as a
usable return type for a new method, Streams#flushOnCloseStream. This new method wraps a
given stream and overrides the close method so that the stream is simply flushed and not closed.
This behavior is used in the TcpTransport when compression is used with a
ReleasableBytesStreamOutput as we need to close the compressed stream to ensure all of the data
is written from this stream. Closing the compressed stream will try to close the underlying stream
but we only want to flush so that all of the written bytes are available.
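A minimal sketch of the two patterns described above (a `bigArrays` instance and the `send` call are assumptions; this is not the exact call site):
```java
// close() now releases the underlying BigArray, and only once
try (ReleasableBytesStreamOutput out = new ReleasableBytesStreamOutput(bigArrays)) {
    // hand consumers that insist on closing (e.g. a compressing stream) a
    // wrapper whose close() only flushes the underlying stream
    BytesStream wrapped = Streams.flushOnCloseStream(out);
    wrapped.writeString("payload");
    wrapped.close();      // flushes `out` instead of closing it
    send(out.bytes());    // hypothetical send of the written bytes
}
```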
Additionally, an error message method added in the BytesRestResponse did not use a builder
provided by the channel and instead created its own JSON builder. This changes that method to use the channel builder and in turn the bytes stream output that is managed by the channel.
This commit renames the random ASCII helper methods in ESTestCase. This
is because these methods ultimately use the random ASCII methods from the
randomized runner, but those methods actually only produce random
strings generated from [a-zA-Z].
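For example (the new names shown here follow what this rename suggests; hedged, not exhaustive):
```java
// after the rename the method names match what is actually produced: [a-zA-Z]
String s = randomAlphaOfLength(10);            // was randomAsciiOfLength(10)
String t = randomAlphaOfLengthBetween(3, 10);  // was randomAsciiOfLengthBetween(3, 10)
```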
Relates #23886
This commit adds a description for a parameter that was added to
BootstrapChecks#enforceLimits(BoundTransportAddress, String) without the
Javadocs having been updated.
While there are use-cases where a single-node is in production, there
are also use-cases for starting a single-node that binds transport to an
external interface where the node is not in production (for example, for
testing the transport client against a node started in a Docker
container). It's tricky to balance the desire to always enforce the
bootstrap checks when a node might be in production with the need for
the community to perform testing in situations that would trip the
bootstrap checks. This commit enables some flexibility for these
users. By setting the discovery type to "single-node", we disable the
bootstrap checks independently of how transport is bound. While this
sounds like a hole in the bootstrap checks, the bootstrap checks can
already be avoided in the single-node use-case by binding only HTTP but
not transport. For users that are genuinely in production on a
single-node use-case with transport bound to an external interface, they
can set the system property "es.enforce.bootstrap.checks" to force
running the bootstrap checks. It would be a mistake for them not to do
this.
Relates #23598
This change adds a setting property that marks a setting as final.
Updating a final setting is prohibited in any context; for instance, an index setting
marked as final must be set at index creation and will refuse any update even if the index is closed.
This change also marks the setting `index.number_of_shards` as final, and the special casing for refusing updates on this setting has been removed.
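A hypothetical declaration using the new property (the constant name and default values are illustrative):
```java
// A setting marked Final rejects updates in any context, whether the index
// is open or closed; it can only be set at index creation time.
Setting<Integer> INDEX_NUMBER_OF_SHARDS = Setting.intSetting(
        "index.number_of_shards", 5, 1,
        Setting.Property.IndexScope, Setting.Property.Final);
```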
This commit adds a single node discovery type. With this discovery type,
a node will elect itself as master and never form a cluster with another
node.
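Configuration is a single setting in elasticsearch.yml:
```
# with this discovery type the node elects itself master and never clusters
discovery.type: single-node
```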
Relates #23595
When terminating an executor service or a thread pool, we first shut
down. Then, we do a timed await of termination. If the await of
termination fails because there are still tasks running, we then invoke
shutdown now. However, this method does not wait for actively executing
tasks to terminate, so we should again wait for termination of these
tasks before returning. This commit does that.
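For reference, a minimal self-contained sketch of the resulting pattern using the plain java.util.concurrent API (not the exact Elasticsearch helper):
```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

static boolean terminate(ExecutorService service, long timeout, TimeUnit unit) throws InterruptedException {
    service.shutdown();                              // stop accepting new tasks
    if (service.awaitTermination(timeout, unit)) {
        return true;                                 // everything finished in time
    }
    service.shutdownNow();                           // interrupt actively executing tasks
    // shutdownNow does not wait for running tasks, so wait once more
    return service.awaitTermination(timeout, unit);
}
```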
Relates #23889
If a test touches ElasticsearchExceptionHandle before the class
initializer for ElasticsearchException has run, a circular class
initialization problem can arise. Namely, the class initializer for
ElasticsearchExceptionHandle depends on the class initializers for
all the classes that extend ElasticsearchException, but these classes
cannot be loaded because ElasticsearchException has not finished its
class initializer. There are tests that can trigger this before
ElasticsearchException has been loaded due to an unlucky ordering of
test execution. This commit addresses this issue by making
ElasticsearchExceptionHandle private, and then exposing methods that
provide the necessary values from ElasticsearchExceptionHandle. Touching
these methods will force the class initializer for
ElasticsearchException to run first.
Currently for field sorting we always use a custom sort field and a custom comparator source.
For numeric fields, though, this custom sort field could be replaced with a standard SortedNumericSortField unless
the field is nested, especially since we removed the field data for numerics.
We can also use a SortedSetSortField for string sorts based on doc_values when the field is not nested.
This change replaces IndexFieldData#comparatorSource with IndexFieldData#sortField, which returns a Sorted{Set,Numeric}SortField when possible, or a custom
sort field when the field sort spec is not handled by the Sorted{Set,Numeric}SortFields.
Today we prevent nodes from joining when indices exist that are too old.
Yet, the opposite can happen too: since Lucene / Elasticsearch is not forward
compatible when it comes to indices, we won't let nodes join the cluster once
there are indices in the cluster state that are newer than the node's version.
This prevents forward compatibility issues, which we never test against. Yet,
this will not prevent rolling restarts or anything like that, since indices
are always created with the minimum node version in the cluster, such that an index
can only get the version of the newer nodes once all nodes are upgraded to that version.
TopDocs et al. got additional parameters to incrementally reduce
top docs. In order to support incremental reduction, `CollapseTopFieldDocs`
needs to have the same properties.
Remote nodes in cross-cluster search can be marked as eligible for
acting as a gateway node via a remote node attribute setting. For example,
if search.remote.node.attr is set to "gateway", only nodes that have
node.attr.gateway set to "true" can be connected to for cross-cluster
search. Unfortunately, there is a bug in the handling of these
attributes due to the use of the dangerous method
Boolean#getBoolean(String), which obtains the system property with the
specified name as a boolean. We are not looking at system properties
here, but node settings. This commit fixes this situation, and adds a
test. A follow-up will ban the use of Boolean#getBoolean.
Relates #23863
This commit changes the ClusterStatsNodes.NetworkTypes so that it does
not print out empty field names when no transport or HTTP type is defined:
```
{
  "network_types": {
    ...
    "http_types": {
      "": 2
    }
  }
}
```
is now rendered as:
```
{
  "network_types": {
    ...
    "http_types": {
    }
  }
}
```
SearchPhaseController is tightly coupled to AtomicArray, which makes
non-dense representations of results very difficult. This commit removes
the coupling and cuts over to Collection rather than List to ensure no
order or random access lookup is implied.
This change introduces a new API called `_field_caps` that allows retrieving the capabilities of specific fields.
Example:
```
GET t,s,v,w/_field_caps?fields=field1,field2
```
... returns:
```
{
  "fields": {
    "field1": {
      "string": {
        "searchable": true,
        "aggregatable": true
      }
    },
    "field2": {
      "keyword": {
        "searchable": false,
        "aggregatable": true,
        "non_searchable_indices": ["t"],
        "indices": ["t", "s"]
      },
      "long": {
        "searchable": true,
        "aggregatable": false,
        "non_aggregatable_indices": ["v"],
        "indices": ["v", "w"]
      }
    }
  }
}
```
In this example `field1` has the same type `text` across the requested indices `t`, `s`, `v`, `w`.
Conversely `field2` is defined with two conflicting types, `keyword` and `long`.
Note that `_field_caps` does not treat this case as an error; rather, it returns the list of unique types seen for this field.
Today we have no way to mark an execution as internal. This commit adds
a simple thread context header that allows executing code in a system context.
This allows intercepting code to make better decisions down the road when
it gets to authentication.
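A hedged sketch of how such a header might be used (method names as this change introduces them; the surrounding code and the `threadContext` variable are hypothetical):
```java
// stash the current context, mark the fresh one as a system context, and
// restore the original on close; interceptors can then detect the marker
try (ThreadContext.StoredContext ignore = threadContext.stashContext()) {
    threadContext.markAsSystemContext();
    assert threadContext.isSystemContext();
    // ... run internal code here ...
}
```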
This commit changes the listener passed to sendMessage from a Runnable
to an ActionListener.
This change also removes IOException from the sendMessage signature.
That signature is misleading as it allows implementers to assume an
exception will be thrown in case of failure. That does not happen due
to Netty's async nature.
When executing an update request, the request timeout is not transferred
to the index/delete request executed on behalf of the update
request. This leads to update requests not timing out when they should
(e.g., if not all shards are available when the request specifies
wait_for_active_shards=all with a small timeout). This commit causes the
index/delete requests to honor the update request timeout.
Relates #23825
Today we have the shard target and the target request ID available in SearchPhaseResults.
Yet, the coordinating node maintains a shard index to reference the request, response tuples
internally which is also used in many other classes to reference back from fetch results to
query results etc. Today this shard index is implicitly passed via the index in AtomicArray
which causes an undesirable dependency on this interface.
This commit moves the shard index into the SearchPhaseResult and removes some dependencies
on AtomicArray. Further removals will follow in the future. The most important refactoring here
is the removal of AtomicArray.Entry which used to be created for every element in the atomic array
to maintain the shard index during result processing. This caused an unnecessary indirection and
dependency, and potentially thousands of unnecessary objects in every search phase.
I think this query should not use the hash code provided by BytesRef#hashCode().
It uses StringHelper#GOOD_FAST_HASH_SEED, which is initialized in a static
block to System.currentTimeMillis().
Running this query on different replicas may return inconsistent results.
Using a fixed seed should guarantee that the docs are sliced consistently
across replicas.
Fixes #23096
This moves `updateReplicaRequest` to `createPrimaryResponse` and separates the
translog updating to be a separate function so that the function purpose is more
easily understood (and testable).
It also separates the logic for `MappingUpdatePerformer` into two functions,
`updateMappingsIfNeeded` and `verifyMappings` so they don't do too much in a
single function. This allows finer-grained error testing for when a mapping
fails to parse or be applied.
Finally, it separates parsing and version validation for
`executeIndexRequestOnReplica` into a separate
method (`prepareIndexOperationOnReplica`) and adds a test for it.
Relates to #23359
The translog already occupies 43 bytes on disk when empty. If the
translog generation threshold is below this, the flush thread can get
stuck in an infinite loop repeatedly rolling the generation. This commit
adds a lower bound on the translog generation to avoid this problem,
however we keep the lower bound small for convenience in testing.
Relates #23779
This commit adds support for the pattern keyword marker filter in
Lucene. Previously, the keyword marker filter in Elasticsearch
supported specifying a keywords set or a path to a set of keywords.
This commit exposes the regular expression pattern based keyword marker
filter also available in Lucene, so that any token matching the pattern
specified by the `keywords_pattern` setting is excluded from being
stemmed by any stemming filters.
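For example, a hypothetical analysis configuration (filter name and pattern are illustrative) that protects any token ending in "ing" from stemming:
```
{
  "settings": {
    "analysis": {
      "filter": {
        "my_keyword_marker": {
          "type": "keyword_marker",
          "keywords_pattern": ".*ing"
        }
      }
    }
  }
}
```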
Closes #4877
As the query of a search request defaults to match_all,
calling _delete_by_query without an explicit query may
result in deleting all data.
In order to protect users against falling into that
pitfall, this commit adds a check to require the explicit
setting of a query.
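To intentionally delete everything, the query must now be spelled out, e.g. (index name illustrative):
```
POST index/_delete_by_query
{
  "query": {
    "match_all": {}
  }
}
```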
Closes #23629
This commit adds the boolean similarity scoring from Lucene to
Elasticsearch. The boolean similarity provides a means to specify that
a field should not be scored with typical full-text ranking algorithms,
but rather just whether the query terms match the document or not.
Boolean similarity scores a query term equal to its query boost only.
Boolean similarity is available as a default similarity option and thus
a field can be specified to have boolean similarity by declaring in its
mapping:
"similarity": "boolean"
Closes #6731
Change the error response when using a non-UTC timezone for range queries with epoch_millis
or epoch_second formats to an illegal argument exception. The goal is to provide a better
explanation of why the query has failed. The current behavior is to respond with a parse exception.
Closes #22621
Today, when parsing mget requests, we silently ignore keys in the top
level that do not match "docs" or "ids". This commit addresses this
situation by throwing an exception if any other key occurs here, and
providing the names of valid keys.
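For example, a request with a misspelled top-level key like the following (hypothetical) one is now rejected with an exception naming "docs" and "ids" as the valid keys, instead of being silently ignored:
```
GET /_mget
{
  "doc": [
    { "_index": "test", "_type": "type", "_id": "1" }
  ]
}
```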
Relates #23746
This commit fixes the serialization for plugin info. Namely, the
serialization incorrectly specified the backwards compatibility version
as strictly after version 5.4.0, whereas it should be on or after
version 5.4.0.
This commit introduces a maximum size for a translog generation and
automatically rolls the translog when a generation exceeds the threshold
into a new generation. This threshold is configurable per index and
defaults to sixty-four megabytes. We introduce this constraint as
sequence numbers will require keeping around more than the current
generation (to ensure that we can rollback to the global
checkpoint). Without keeping the size of generations under control,
having to keep old generations around could consume excessive disk
space. A follow-up will enable commits to trim previous generations
based on the global checkpoint.
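The threshold is configurable per index; assuming the setting name `index.translog.generation_threshold_size` (hedged from the matching reference documentation), it can be adjusted like so:
```
PUT /my_index/_settings
{
  "index.translog.generation_threshold_size": "64mb"
}
```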
Relates #23606
The OpenJDK project provides early-access builds of upcoming
releases. These early-access builds are not suitable for
production. These builds sometimes end up on systems due to aggressive
packaging (e.g., Ubuntu). This commit adds a bootstrap check to ensure
these early-access builds are not being used in production.
Relates #23743
The hit object can be very small, e.g. when using "stored_fields": ["_none_"].
This adds a test that checks that we can still parse back the object,
and also checks the case where type/id are null.
Removed `parse(String index, String type, String id, BytesReference source)` from DocumentMapper.java. It was only used in test files and never in the main code, so all of the test files that used it were modified to use the existing `parse(SourceToParse source)` method instead.
This commit fixes an issue manifested in the
SharedClusterSnapshotRestoreIT#testGetSnapshotsRequest where a delete
request on a snapshot encounters an in-progress snapshot, so it first
tries to abort the snapshot. During the aborting process, an exception
is thrown which is handled by the snapshot listener's onSnapshotFailure
method. This method retries the delete snapshot request, only to
encounter that the snapshot is missing, throwing an exception. It is
possible that the snapshot failure resulted in the snapshot never having
been written to the repository, and hence, there is nothing to delete.
This commit handles the SnapshotMissingException by logging it and
notifying the listener of the missing snapshot.
Closes #23663
This is especially useful when we rewrite the query because the result of the rewrite can be very different on different shards. See #18254 for example.
The output of the different implementations of terms aggs is always very similar. The toXContent methods for each of those classes, though, were duplicating almost the same code multiple times. This commit centralizes the code for rendering XContent to a single place, which can be reused from the different terms aggs implementations.
We currently use POSIX exit codes in all of our CLIs. However, POSIX
only suggests these exit codes are standard across tools. It does not
prescribe particular uses for codes outside of that range. This commit
adds 2 exit codes specific to plugin installation to make it easier to
distinguish an incorrectly built plugin from a plugin that already exists.
closes #15295
This commit catches the underlying failure when trying to list plugin
information when a plugin is incompatible with the current version of
elasticsearch. This could happen when elasticsearch is upgraded but old
plugins still exist. With this change, all plugins will be output,
instead of failing at the first out of date plugin.
closes #20691
Changes reindex and friends to wait until the entire request has
been "cleaned up" before responding. "Clean up" in this context
is clearing the scroll and (for reindex-from-remote) shutting
down the client. Failures to clean up are still only logged, not
returned to the user.
Closes #23653
After the removal of the joda time hack we used to have, we can clean up
the codebase handling in security, jarhell and plugins to be more picky
about uniqueness. This was originally in #18959, which was never merged.
closes #18959
The SharedClusterSnapshotRestoreIT#testDataFileCorruptionDuringRestore
would fail sporadically because it tried to simulate restoring a
corrupted index. The test would wait until the restore is finished (and
marked as failed) before exiting. However, in the background, the
cluster still continues to retry allocation of the failed shards,
despite the restore operation being marked as completed, which in turn
generates cluster states to process. The end of every ESIntegTestCase
verifies that none of the nodes have any pending cluster states to
process. Hence, this check sometimes fails on this particular test.
This commit solves the issue by ensuring the index is deleted before
exiting the test.
Search took time uses an absolute clock to measure elapsed time, and
then tries to deal with the complexities of using an absolute clock for
this purpose. Instead, we should use a high-precision monotonic relative
clock that is designed exactly for measuring elapsed time. This commit
modifies the search infrastructure to use a relative clock for measuring
took time, but still provides an absolute clock for the components of
search that require a real clock (e.g., index name expression
resolution, etc.).
Relates #23662
A better toString() is added for snapshot operations in progress in the
cluster state and logging has been increased to help debug
SharedClusterSnapshotRestoreIT tests.
MapperService#parentTypes is rewrapped in an UnmodifiableSet in MapperService#internalMerge every time the cluster state is updated. After thousands of updates the collection is wrapped so deeply that calling a method on it generates a StackOverflowError.
Closes #23604
When adding filesystem stats from individual filesystems, free and
available can overflow. This commit guards against this by capping
the values at Long.MAX_VALUE in these situations.
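A minimal self-contained sketch of the guard (a hypothetical helper, not the exact Elasticsearch code):
```java
// Adding two stats counters saturates at Long.MAX_VALUE instead of wrapping
// around into a negative value when the sum overflows.
static long saturatingAdd(long left, long right) {
    assert left >= 0 && right >= 0;
    final long sum = left + right;
    // two non-negative operands overflowed iff the sum went negative
    return sum < 0 ? Long.MAX_VALUE : sum;
}
```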
Relates #23641