Since groovy was removed, we no longer have any ScriptEngines with
resources to release. We may want to keep the option open for a script
engine to close resources, but this would not be common. This commit
adds a default implementation to ScriptEngine for `close()` to reduce
the boilerplate that must be added for a ScriptEngine implementation.
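For illustration, a minimal sketch of what that default might look like, assuming ScriptEngine extends Closeable (the `getType()` method is just a stand-in for the rest of the interface):
```java
import java.io.Closeable;
import java.io.IOException;

public interface ScriptEngine extends Closeable {
    String getType();

    // Most engines hold no closeable resources, so default to a no-op;
    // an engine that does hold resources can still override this.
    @Override
    default void close() throws IOException {
    }
}
```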
This commit increases the logging level on the index and relocate
concurrently test to obtain some insight into the global checkpoint
moving backwards.
The current logic tries to make sure we waited some amount of time (but not too long). This is unpredictable and fails all the time. This commit removes all of it and just makes sure that we throw the right exceptions after timing out.
Fixes #24369
* SignificantText aggregation - like significant_terms but doesn't require fielddata=true; recommended for use with the `sampler` agg to limit the expense of tokenizing docs, and takes an optional `filter_duplicate_text`: true setting to avoid stats skew from repeated sections of text in search results.
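A hedged sketch of that recommended combination via the Java aggregation builders (builder and setter names assumed; the `body` field is made up):
```java
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.bucket.sampler.SamplerAggregationBuilder;
import org.elasticsearch.search.aggregations.bucket.significant.SignificantTextAggregationBuilder;

// Wrap significant_text in a sampler agg to bound tokenization cost, and
// filter duplicate text to avoid stats skew from repeated sections.
SamplerAggregationBuilder sample = AggregationBuilders.sampler("sample")
    .shardSize(100)
    .subAggregation(
        new SignificantTextAggregationBuilder("keywords", "body")
            .filterDuplicateText(true));
```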
Closes #23674
With #24779 in place, we can now guarantee that a single translog generation file will never have a sequence number conflict that needs to be resolved by looking at primary terms. These conflicts can occur when a replica contains an operation which isn't part of the history of a newly promoted primary. That primary can then assign a different operation to the same slot and replicate it to the replica.
PS. Knowing that each generation file is conflict-free will simplify repairing these conflicts when we read from the translog.
PPS. This PR also fixes some bugs in the piping of primary terms in the bulk shard action. These bugs are a result of the legacy of IndexRequest/DeleteRequest being a ReplicationRequest. We need to change that as a follow-up.
Relates to #10708
This commit cleans up tests which currently use custom script engine
implementations, converting them to use a MockScriptEngine with script
functions provided by the tests. It also creates a common set of metric
scripts which were copied across a couple of metric agg tests.
A user reported uneven balancing of load on nodes handling search requests from Kibana which supplies a session ID in a routing preference. Each shardId was selecting the same node for a given session ID because one data node had all primaries and the other data node held all replicas after cluster startup.
This change counteracts the tendency to opt for the same node given the same user-supplied preference by incorporating shard ID in the hash of the preference key. This will help randomise node choices across shards.
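A hedged sketch of the idea (Murmur3HashFunction is Elasticsearch's routing hash; `preference` and `shardId` are assumed to be in scope, and the exact mixing may differ from the real change):
```java
import org.elasticsearch.cluster.routing.Murmur3HashFunction;

// Same preference string, different shard IDs -> different node orderings,
// so one session ID no longer pins every shard to the same node.
int seed = Murmur3HashFunction.hash(preference);
seed = 31 * seed + shardId.hashCode();
// 'seed' then drives the choice/rotation of a shard copy for this shardId.
```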
Closes #24642
This commit adds comments to org.elasticsearch.Assertions that disable
IntelliJ from complaining about using assert with side effects, and
about using constant conditions there, as the side effect with a constant
condition is intentionally employed.
Today in the code base we have lots of ugly code blocks like:
```java
boolean assertionsEnabled = false;
assert assertionsEnabled = true;
if (assertionsEnabled) {
    // something
}
```
These are a nuisance. Instead, we can do this in exactly one place and
replace these blocks with:
```java
if (Assertions.ENABLED) {
    // something
}
```
The cool thing here is that since this is a static final field, the JIT
can optimize away the check at runtime if assertions are disabled.
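The single place might look roughly like this (a sketch; the real org.elasticsearch.Assertions may differ in detail):
```java
package org.elasticsearch;

public final class Assertions {

    private Assertions() {
    }

    public static final boolean ENABLED;

    static {
        boolean enabled = false;
        // Intentional side effect with a constant condition: only runs with -ea.
        assert enabled = true;
        ENABLED = enabled;
    }
}
```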
Relates #24834
This commit moves the handling of nested and parent/child inner hits to specialized classes that can be defined outside of ES core.
InnerHitBuilderContext is now used by the parent query (nested or hasChild, ...) to build the sub context from the InnerHitBuilder definition.
BWC is also ensured so that nodes in previous versions can still send/receive inner hits to/from this version.
Relates #20257
After releasing 5.3.2, the 5.3.3 version constant was created. However,
this causes issues for the rolling upgrade tests, which expect artifacts
to have been published for all older versions and no point releases created off
of the older versions (older meaning more than one version behind the
current version). This commit removes the 5.3.3 version constant,
assuming we will not need it anywhere.
As we work towards contexts implying the return type of compilation, we
first need ScriptContext to not be an enum. This commit removes the
Standard enum and Plugin subclass of ScriptContext.
This commit fixes the RangeFieldMapper and RangeQueryBuilder to pass the correct relation to the RangeQuery when performing a range query over range fields.
Currently a `delete document` request against a non-existing index actually **creates** this index.
With this change the `delete document` no longer creates the previously non-existing index and throws an `index_not_found` exception instead.
However as discussed in https://github.com/elastic/elasticsearch/pull/15451#issuecomment-165772026, if an external version is explicitly used, the current behavior is preserved and the index is still created and the document is marked for deletion.
Fixes #15425
This commit is a simple cleanup to remove an unnecessary extra method on
ScriptService which was only used in 3 places. There is now only one
search method.
ScriptEngine implementations have an overridable method to indicate they
are safe to use as inline scripts. Since groovy was removed from 6.0,
there are no longer any implementations which used the default false
value. Furthermore, the value was not actually read anywhere. This
commit removes the method. The ScriptEngineRegistry was also no longer
necessary, as it was only used to build a map from language to engine.
This commit removes a convenience method from index shard that is used
at exactly one call site. This method is used to call back a listener
when an operation arrives on too old a primary term. Since it is only used
at one call site, we simply inline the method.
Today a replica learns of a new primary term via a cluster state update
and there is not a clean transition between the older primary term and
the newer primary term. This commit modifies this situation so that:
- a replica shard learns of a new primary term via replication
operations executed under the mandate of the new primary
- when a replica shard learns of a new primary term, it blocks
operations on older terms from reaching the engine, with a clear
transition point between the operations on the older term and the
operations on the newer term
This work paves the way for a primary/replica sync on primary
promotion. Future work will also ensure a clean transition point on a
promoted primary, and prepare a replica shard for a sync with the
promoted primary.
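A hedged sketch of the gating idea on the replica (field and method names are illustrative, not the actual IndexShard code):
```java
// Reject operations from a stale primary term; on a newer term, record it so
// that in-flight operations on the old term can no longer reach the engine.
private synchronized void ensureOperationTerm(long opPrimaryTerm) {
    if (opPrimaryTerm < primaryTerm) {
        throw new IllegalStateException("operation primary term [" + opPrimaryTerm
            + "] is older than the current term [" + primaryTerm + "]");
    }
    if (opPrimaryTerm > primaryTerm) {
        primaryTerm = opPrimaryTerm; // the real code also drains older in-flight ops
    }
}
```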
Relates #24779
Allows plugins to register pre-configured tokenizers. Many of
the decisions are the same as those in #24223 and #24572.
This only migrates the lowercase tokenizer, but I figure that
is a good start because it proves out the features.
This change removes the field data specialization needed for the parent field and replaces it with
a simple DocValuesIndexFieldData. The underlying global ordinals are retrieved via a new function called
IndexOrdinalsFieldData#getOrdinalMap.
The children aggregation is also modified to use a simple WithOrdinals value source rather than the deleted WithOrdinals.Parent.
Relates #20257
Today when we get a metadata snapshot from the index shard we ensure
that if there is no engine started on the shard we lock the index
writer before we go and fetch the store metadata. Yet, if we concurrently
recover that shard, recovery finalization might fail since it can't acquire
the IW lock on the directory. This is mainly due to the wrong order of acquiring
the IW lock and the metadata lock. Fetching store metadata without a started engine
should block on the metadata lock in Store.java, but since IndexShard locks the writer
first we get into a failed recovery dance, especially in tests. In production
this is less of an issue since we rarely get into this situation, if at all.
Closes #24481
The method should rather advance one token and only then require a START_OBJECT as the current token. This allows parsing with a parser that's positioned at the beginning of the response, where the initial/current token is null.
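A sketch of the suggested pattern (assuming Elasticsearch's XContentParser; the exception choice is illustrative):
```java
import org.elasticsearch.common.ParsingException;
import org.elasticsearch.common.xcontent.XContentParser;

// Advance first: this works even when the parser sits at the very beginning
// of the response, where currentToken() is still null.
XContentParser.Token token = parser.nextToken();
if (token != XContentParser.Token.START_OBJECT) {
    throw new ParsingException(parser.getTokenLocation(), "expected a start object");
}
```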
Now that the Java High Level Rest Client has tests to parse all aggregations,
this test is no longer needed. We have better tests like
AggregationsTests and subclasses of InternalAggregationTestCase.
Related to #23965
This commit moves some functionality from PublishClusterStateAction to ZenDiscovery, which allows each class to focus on its core competencies:
- PendingStatesQueue is now solely managed by ZenDiscovery (no shared access by both PublishClusterStateAction and ZenDiscovery)
- Validation logic is handled exclusively by ZenDiscovery
This commit is a simple refactoring of the update shard logic for
primaries. Namely, there was some duplicated code here that was annoying
to have to read twice so it is now collapsed with this commit.
We have decided not to force a future version upgrade to deal with this todo. Rather, we'll keep the code until it's in our way or the opportunity arises to deal with it.
Now that we generate the versions list from Versions.java we can
drop the list of versions maintained for vagrant testing. One nice
thing that the vagrant testing did was to check if the list of
versions was out of date. This moves that test to the core
project.
This PR revolves around places in the code where introducing a StringBuilder might make the construction
of a String easier to follow, and might also avoid cases where the compiler's very safe way of introducing
a StringBuilder instead of String concatenation is not optimal for performance.
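For illustration, the classic case where an explicit StringBuilder is both clearer and cheaper than repeated concatenation:
```java
// String += in a loop re-copies the accumulated string on every iteration;
// an explicit StringBuilder appends in place and makes the intent obvious.
StringBuilder sb = new StringBuilder();
for (String part : parts) {
    sb.append(part).append(',');
}
String joined = sb.toString();
```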
This fixes a bug in the 'date_histogram' aggregation that can happen when using 'extended_bounds'
together with some 'offset' parameter. Offsets should be applied after rounding the extended bounds
and also be applied when adding empty buckets during the reduce phase in InternalDateHistogram.
Closes #23776
Native scripts have been replaced in documentation by implementing
a ScriptEngine and they were deprecated in 5.5.0. This commit
removes the native script infrastructure for 6.0.
closes #19966
Shared settings were added initially to allow the few common setting
names across the aws plugins. However, in 6.0 these settings have been
removed. The last use was in netty, but since 6.0 also has the netty 3
modules removed, there is no longer a need for the shared property. This
commit removes the shared setting property.
SearchResponse#fromXContent allows parsing a search response, including search hits, aggregations, suggestions and profile results. Only the aggs that we can parse today are supported (which means all of them except a couple that are left to support). SearchResponseTests reuses the existing test infra to randomize aggregations, suggestions and profile response.
Relates to #23331
This commit adds a new method to the SearchOperationListener that allows implementers to validate
the SearchContext immediately after it is retrieved from the active contexts. The listener may
throw a runtime exception if it deems the SearchContext is not valid and that the use of the context
should be terminated.
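A hedged sketch of such a listener; the method signature is assumed from the description above, and `belongsToCurrentUser` is a hypothetical predicate standing in for whatever invariant an implementer wants to enforce:
```java
import org.elasticsearch.index.shard.SearchOperationListener;
import org.elasticsearch.search.internal.SearchContext;
import org.elasticsearch.transport.TransportRequest;

SearchOperationListener listener = new SearchOperationListener() {
    @Override
    public void validateSearchContext(SearchContext context, TransportRequest request) {
        if (belongsToCurrentUser(context) == false) { // hypothetical check
            throw new IllegalStateException("search context is not valid for this request");
        }
    }
};
```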
* [TEST] Fix TransportReplicationActionTests.testRetryOnReplica for replica request
We were improperly testing that it was a `ConcreteShardRequest` instead of a
`ConcreteReplicaRequest`. This adds that change and also ensures that the
checkpoint is retrievable from the request.
* Fix line-length
Approaching the release of 6.0 we need to sort out the usage of
`Version#minimumCompatibilityVersion`, which was still set to 5.0.0.
This change moves it to the latest released version of 5.x (5.4 at this point)
to ensure we are compatible with the latest minor of the previous major. This change
also removes all the `_UNRELEASED` suffixes from the versions that were released, and drops versions
that were never released and are not expected to be released (bugfixes in minors that are not
the latest in the previous major).
Today the `_field_caps` API doesn't implement its request serialization
correctly since indices and indices options are not serialized at all.
This will likely break with all transport clients etc. if this request
must be sent across the network. This commit fixes this and adds correct
handling if we have only remote indices, to prevent the inclusion of
all local indices.
* Fix ArrayIndexOutOfBoundsException in Range Aggregation when no ranges are specified in the query
* Revert "Fix ArrayIndexOutOfBoundsException in Range Aggregation when no ranges are specified in the query"
This reverts commit ad57d8feb3577a64b37de28c6f3df96a3a49fe93.
* Fix range aggregation out of bounds exception when there are no ranges in a range or date_range query
* Fix range aggregation out of bounds exception when there are no ranges in the query
This fix is applied to range queries, date range queries, ip range queries and geo distance aggregation queries
Native scripts are no longer documented and instead using a ScriptEngine
is recommended. This change adds a deprecation warning for removal in
6.0.
relates #19966
This PR adds a new thread pool type: `fixed_auto_queue_size`. This thread pool
behaves like a regular `fixed` threadpool, except that every
`auto_queue_frame_size` operations (default: 10,000) in the thread pool,
[Little's Law](https://en.wikipedia.org/wiki/Little's_law) is calculated and
used to adjust the pool's `queue_size` either up or down by 50. A minimum and
maximum are also taken into account. When the min and max are the same value, a
regular fixed executor is used instead.
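A rough sketch of the adjustment under Little's Law, with made-up names for the statistics tracked over a frame:
```java
// Little's Law: L = lambda * W, where lambda is the task completion rate and
// W is the average time a task spends in the pool (queued plus executing).
double lambda = taskCount / (double) frameNanos;   // tasks per nanosecond
double w = totalTaskNanos / (double) taskCount;    // average nanos per task
int optimalQueueSize = (int) (lambda * w);

// Nudge queue_size towards the estimate in steps of 50, within [min, max].
if (optimalQueueSize > queueSize) {
    queueSize = Math.min(maxQueueSize, queueSize + 50);
} else if (optimalQueueSize < queueSize) {
    queueSize = Math.max(minQueueSize, queueSize - 50);
}
```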
The `SEARCH` threadpool is changed to use this new type of thread pool. However,
the min and max are both set to 1000, meaning auto adjustment is opt-in rather
than opt-out.
Resolves #3890
Moves the remaining preconfigured token filters into the analysis-common module. There were a couple of tests in core that depended on the pre-configured token filters so I had to touch them:
* `GetTermVectorsCheckDocFreqIT` depended on `type_as_payload` but didn't do anything important with it. I dropped the dependency. Then I moved the test to a single node test case because we're trying to cut down on the number of `ESIntegTestCase` subclasses.
* `AbstractTermVectorsTestCase` and its subclasses depended on `type_as_payload`. I dropped their usage of the token filter and added an integration test for the termvectors API that uses `type_as_payload` to the `analysis-common` module.
* `AnalysisModuleTests` expected a few pre-configured token filters to be registered by default. They aren't any more, so I dropped this assertion. We assert that the `CommonAnalysisPlugin` registers these pre-built token filters in `CommonAnalysisFactoryTests`.
* `SearchQueryIT` and `SuggestSearchIT` had tests that depended on the specific behavior of the token filters so I moved the tests to integration tests in `analysis-common`.
In scripts (at least in some of the languages), the terms dictionary and
postings can be accessed with the special _index variable. This is for
very advanced use cases which want to do their own scoring. The problem
is that segment-level statistics must be recomputed for every document.
Additionally, this is not friendly to the terms index caching, as the
order of looking up terms should be controlled by Lucene.
This change removes _index from scripts. Anyone using it can and should
instead write a Similarity plugin, which is explicitly designed to allow
doing the calculations needed for a relevance score.
closes #19359
Today when an index is `read-only` the index is also blocked from
being deleted, which is sometimes undesired since, in order to make
changes to a cluster, indices must be deleted to free up space. Switching
indices read-only while still allowing deletions to free up space is a
likely scenario in a hosted environment where disk space is limited.
This moves the releasing logic to the base test, so that individual test cases don't need
to worry about releasing the aggregators. It's not a big deal for individual aggs,
but once tests start using sub-aggs, it can become tricky to free (without double-freeing)
all the aggregators.
Range queries with now-based date ranges were previously not allowed,
but since #23921 these queries have been allowed. This change should really
fix range queries with now-based date ranges.
Adding a unit test for InternalAdjacencyMatrix that extends the shared InternalAggregationTestCase
that we use for testing aggregations.
Relates to #22278
The test checks that the number of outgoing/incoming recoveries of a shard is 0 after recoveries are done. Sadly, that is not guaranteed by the current recovery logic, as we decrement the counters only when all references to the relevant RecoveryTarget object have been released. This may happen asynchronously to the recovery completion, which causes the test to fail. I looked at options to change the recovery logic to have the recovery counters decrease before the recovery is done *under normal circumstances*, but I don't see a clean way to do it. Since it won't give hard guarantees anyway, I opted to add assertBusy to the test.
Closes #24669
This commit adds a deprecation warning if `_index` is used in scripts.
It is emitted each time a script is invoked, but not per document. There
is no test because constructing a LeafIndexLookup is quite difficult,
but the deprecation warning does show up in IndexLookupIT, there is just
no way to assert warnings in integ tests.
relates #19359
Today we assert hard if failure listeners are invoked more than once. Yet, this
can happen if we cancel the execution, since the caller and the handler will get
the exception on the cancelable threads and will notify the listener concurrently
if timing allows. This commit relaxes the assertion towards handling multiple
invocations with `ExecutionCancelledException`.
Closes #24010, closes #24179
Closes vagnerclementino/elasticsearch/#98
Template script engines (mustache, the only one) currently return a
BytesReference that users must know is utf8 encoded. This commit
modifies all callers and mustache to have the template engine return
String. This is much simpler, and does not require decoding in order to
use (for example, in ingest).
When retrieving documents to extract terms from as part of a more_like_this query, the _routing value can be set, yet it gets lost. That leads to not being able to retrieve the documents, hence more_like_this used to return no matches all the time.
Closes #23699
We generally accept string values when a boolean is expected. We've been doing that in our parsing code, but we missed that bit when moving parsing code to ObjectParser, which throws an error instead. This commit makes ObjectParser also parse string values into booleans. It throws an error in case the value is not `true` or `false`.
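For illustration, a declaration that now accepts both JSON booleans and the strings `true`/`false` (the `MyRequest` class and `verbose` field are made up):
```java
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.xcontent.ObjectParser;

ObjectParser<MyRequest, Void> parser = new ObjectParser<>("my_request", MyRequest::new);
// Accepts {"verbose": true} and {"verbose": "true"};
// {"verbose": "maybe"} still fails with an error.
parser.declareBoolean(MyRequest::setVerbose, new ParseField("verbose"));
```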
Closes #21802
The rendering methods in the String and Long significant terms aggregations
and buckets are very similar. They can be factored out into the
InternalSignificantTerms class and an InternalMappedSignificantTerms class.
This commit changes SignificantTerms.Bucket so that it is no longer an
abstract class but an interface. It will be easier for the Java
High Level Rest Client to provide its own implementation of
SignificantTerms and SignificantTerms.Bucket. Also, it is now more
coherent with the other aggregations.
The disruption tests sit in a single test suite which causes these tests
to be single-threaded. We can split this test suite into multiple suites
(logically, of course) enabling them to be run in parallel reducing the
total run time of all integration tests in core. This commit splits the
discovery with service disruptions test suite into three suites
- master disruptions
- discovery disruptions
- cluster disruptions
The last one could probably be better named; it is meant to represent
performing actions in the cluster (indexing, failing a shard, etc.)
while a disruption is taking place.
Relates #24662
This commit removes the deprecated support for .yaml and .json files. If
the files still exist, the node will fail to start, indicating the file
must be converted or renamed.
closes #19391
With the current implementation, SniffNodesSampler might close the
current connection right after a request is sent but before the response
is correctly handled. This causes timeouts in the transport client
when sniffing is activated.
closes #24575, closes #24557
When constructing an array list, if we know the size of the list in
advance (because we are adding objects to it derived from another list),
we should size the array list to the appropriate capacity in advance (to
avoid resizing allocations). This commit does this in various places.
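For example (names illustrative):
```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

// Pre-size to the source list's size to avoid intermediate growth copies.
List<String> upperCased = new ArrayList<>(names.size());
for (String name : names) {
    upperCased.add(name.toUpperCase(Locale.ROOT));
}
```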
Relates #24439
* Add parent-join module
This change adds a new module named `parent-join`.
The goal of this module is to provide a replacement for the `_parent` field but as a first step this change only moves the `has_child`, `has_parent` queries and the `children` aggregation to this module.
These queries and aggregations are no longer in core but they are deployed by default as a module.
Relates #20257
Today we prune transport handlers in TransportService when a node is disconnected.
This can cause connections to starve in the TransportService if the connection is
opened as a short-lived connection, i.e. without sharing the connection to a node
via registering it in the transport itself. This change now moves to pruning based
on the connection's cache key to ensure we notify handlers as soon as the connection
is closed, for all connections, not just for registered connections.
Relates to #24632
Relates to #24575
Relates to #24557
- Removes clusterState, getInitialClusterState and getMinimumMasterNodes methods from Discovery interface.
- Sets PingContextProvider in ZenPing constructor
- Renames state in ZenDiscovery to committedState
This commit documents how to write a `ScriptEngine` in order to use
expert internal apis, such as using Lucene directly to find index term
statistics. These documents prepare the way to remove both native
scripts and IndexLookup.
The example java code is actually compiled and tested under a new gradle
subproject for example plugins. This change does not yet break up
jvm-example into the new examples dir, which should be done separately.
relates #19359
relates #19966
Specifying s3 access and secret keys inside repository settings is not
secure. However, until there is a way to dynamically update secure
settings, this is the only way to dynamically add repositories with
credentials that are not known at node startup time. This commit adds
back the `access_key` and `secret_key` s3 repository settings, but protects
them with a required system property, `allow_insecure_settings`.
This allows other plugins to use a client to call the functionality
that is in the core modules without duplicating the logic.
Plugins can now safely send the request and response classes via the
client even if the requests are executed locally. All relevant classes
are loaded by the core classloader such that plugins can share them.
This re-adds the commit that was reverted in 952feb58e4.
If we fail to acquire the shard lock and need to retry and wait for the new
cluster state, we were sending the wrong kind of request for the replica
action. This commit fixes this issue.
This commit adds support for histogram and date_histogram agg compound order by refactoring and reusing terms agg order code. The major change is that the Terms.Order and Histogram.Order classes have been replaced/refactored into a new class BucketOrder. This is a breaking change for the Java Transport API. For backward compatibility with previous ES versions the (date)histogram compound order will use the first order. Also the _term and _time aggregation order keys have been deprecated; replaced by _key.
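A hedged sketch of the new API (method names as described above; the field and agg names are made up): order a date_histogram by descending count, breaking ties by ascending key.
```java
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.BucketOrder;

// Compound order: count desc, then _key asc as the tie-breaker.
AggregationBuilders.dateHistogram("per_day")
    .field("timestamp")
    .order(BucketOrder.compound(BucketOrder.count(false), BucketOrder.key(true)));
```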
Relates to #20003: now that all these aggregations use the same order code, it should be easier to move validation to parse time (as a follow up PR).
Relates to #14771: histogram and date_histogram aggregation order will now be validated at reduce time.
Closes #23613: if a single BucketOrder that is not a tie-breaker is added with the Java Transport API, it will be converted into a CompoundOrder with a tie-breaker.
This commit addresses an issue in the missing active IDs prevent advance
test from the global checkpoint tracker. The assumptions this test was
making about reality were violated when global checkpoints were inlined
(specifically, the component of that change where the tracker's
knowledge of the global checkpoint was updated inline with updates to
the tracker's knowledge of local checkpoints for an allocation ID). The
point of the test was to ensure that a lagging shard prevents the global
checkpoint from advancing, so this commit rewrites the test with that in
mind.
This allows other plugins to use a client to call the functionality
that is in the core modules without duplicating the logic.
Plugins can now safely send the request and response classes via the
client even if the requests are executed locally. All relevant classes
are loaded by the core classloader such that plugins can share them.
With global checkpoints we also take into account whether a global checkpoint must be fsynced.
Yet, with the recent addition of inlining global checkpoints into indexing operations, from a
test perspective unnecessary fsyncs might be reported if `Translog#syncNeeded` is checked.
Now the test only checks if the last write location triggers an fsync instead.
Closes #24600
This commit fixes an issue in the checkpoints advance test. Namely, when
there are zero documents indexed, after the global checkpoint is synced, the
global checkpoint will have advanced to the no ops performed. There is a
larger conceptual problem here, namely that the primary does not update
its knowledge of its own local checkpoint upon recovery which causes the
global checkpoint to initially be unassigned and then advance to no ops
performed, but this will be addressed in a follow-up.
This adds parsing to all implementations of SingleBucketAggregations. They are mostly similar, so they share the common
base class `ParsedSingleBucketAggregation` and the shared base test `InternalSingleBucketAggregationTestCase`.
Previously, a query weight was created for each search hit that needed to compute inner hits;
with this change the weight of the inner hit query is computed once for all search hits.
Closes #23917
We allow non-dynamic settings to be updated on closed indices but we don't
check if the updated settings can be used to open/create the index.
This can lead to an unrecoverable state where the settings are updated but the index
cannot be reopened since the settings are not valid. Trying to update the invalid settings
is also not possible since the update will fail to validate the current settings.
This change adds validation of the updated settings for closed indices and makes sure that the new settings
do not prevent the reopening of the index.
Fixes #23787
Previously, if a master node updated the cluster state to reflect that a
snapshot is completed, but subsequently failed before processing a
cluster state to remove the snapshot from the cluster state, then the
newly elected master would not know that it needed to clean up the
leftover cluster state.
This commit ensures that the newly elected master sees if there is a
snapshot in the cluster state that is in the completed state but has not
yet been removed from the cluster state.
Closes #24452
There are now three public static methods to build instances of
PreConfiguredTokenFilter, and the ctor is private. I chose static
methods instead of constructors because they allow us to change
out the implementation returned if we so desire.
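A sketch of what using one of those factories might look like from a plugin class implementing AnalysisPlugin (assuming a `singleton` factory among the three; the filter name is made up):
```java
import java.util.Collections;
import java.util.List;

import org.apache.lucene.analysis.LowerCaseFilter;
import org.elasticsearch.index.analysis.PreConfiguredTokenFilter;

@Override
public List<PreConfiguredTokenFilter> getPreConfiguredTokenFilters() {
    // singleton(name, useFilterForMultitermQueries, create): one shared instance.
    return Collections.singletonList(
        PreConfiguredTokenFilter.singleton("my_lowercase", true, LowerCaseFilter::new));
}
```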
Relates to #23658
We have to do something to force the global checkpoint to be
synchronized to the replicas or the assertions at the end of the test
that they are in sync will trip. Since the last write operation to hit a
replica shard will only carry the penultimate global checkpoint (it will
advance when the replicas respond with their local checkpoint), and a
background sync will not happen until the primary shard falls idle, we
force a sync through a refresh action.
Currently, the get snapshots API (e.g. /_snapshot/{repositoryName}/_all)
provides information about snapshots in the repository, including the
snapshot state, number of shards snapshotted, failures, etc. In order
to provide information about each snapshot in the repository, the call
must read the snapshot metadata blob (`snap-{snapshot_uuid}.dat`) for
every snapshot. In cloud-based repositories, this can be expensive,
both from a cost and performance perspective. Sometimes, all the user
wants is to retrieve all the names/uuids of each snapshot, and the
indices that went into each snapshot, without any of the other status
information about the snapshot. This minimal information can be
retrieved from the repository index blob (`index-N`) without needing to
read each snapshot metadata blob.
This commit enhances the get snapshots API with an optional `verbose`
parameter. If `verbose` is set to false on the request, then the get
snapshots API will only retrieve the minimal information about each
snapshot (the name, uuid, and indices in the snapshot), and only read
this information from the repository index blob, thereby giving users
the option to retrieve the snapshots in a repository in a more
cost-effective and efficient manner.
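A hedged sketch of requesting the minimal listing via the transport client (assuming the new flag is exposed as `setVerbose` on the request builder):
```java
// Only snapshot names, uuids and indices; served from the index-N blob
// rather than per-snapshot snap-{uuid}.dat blobs.
GetSnapshotsResponse response = client.admin().cluster()
    .prepareGetSnapshots("my_repository")
    .setVerbose(false)
    .get();
```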
Closes #24288
It was using the wrong version, which can cause errors like the following when running tests:
```
1> java.security.AccessControlException: access denied ("java.net.SocketPermission" "[0:0:0:0:0:0:0:1]:34221" "connect,resolve")
1> at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) ~[?:1.8.0_111]
1> at java.security.AccessController.checkPermission(AccessController.java:884) ~[?:1.8.0_111]
1> at java.lang.SecurityManager.checkPermission(SecurityManager.java:549) ~[?:1.8.0_111]
1> at java.lang.SecurityManager.checkConnect(SecurityManager.java:1051) ~[?:1.8.0_111]
1> at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:625) ~[?:?]
1> at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processSessionRequests(DefaultConnectingIOReactor.java:273) ~[httpcore-nio-4.4.5.jar:4.4.5]
1> at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents(DefaultConnectingIOReactor.java:139) ~[httpcore-nio-4.4.5.jar:4.4.5]
1> at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:348) ~[httpcore-nio-4.4.5.jar:4.4.5]
1> at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:192) ~[httpasyncclient-4.1.2.jar:4.1.2]
1> at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64) ~[httpasyncclient-4.1.2.jar:4.1.2]
```
If we don't do this, although we don't set the subAggsSupplier again, we will reuse the one set in the previous run; hence we can end up in situations where we keep on generating sub-aggregations until a StackOverflowError gets thrown.