This is the first step towards adaptive replica selection (#24915). This PR
tracks the execution time, also known as the "service time" of a task in the
threadpool. The `QueueResizingEsThreadPoolExecutor` then stores a moving average
of these task times which can be retrieved from the executor.
Currently there is no functionality using the EWMA yet (other than tests); this
is only a bite-sized building block so that it's easier to review.
[1]: EWMA = Exponentially Weighted Moving Average
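For reference, the EWMA update itself is a one-liner; the sketch below is
illustrative only (class, field, and alpha choices are assumptions, not the
executor's actual code):

```java
// Minimal EWMA sketch; all names and the alpha value are illustrative assumptions.
public class TaskTimeEwma {
    private final double alpha; // smoothing factor in (0, 1]; higher favors recent samples
    private double ewmaNanos;

    public TaskTimeEwma(double alpha) {
        this.alpha = alpha;
    }

    // Called once per completed task with its measured service time.
    // Not thread-safe; a real executor would need atomic updates.
    public void addValue(long taskNanos) {
        ewmaNanos = alpha * taskNanos + (1 - alpha) * ewmaNanos;
    }

    public double getAverageNanos() {
        return ewmaNanos;
    }
}
```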
Removes an assert that did not hold (resizing can result in a smaller size
than the current size, while the assert attempted to verify the new size is
always greater than the current).
REST handlers that require a body will throw an ElasticsearchParseException "request body required".
REST handlers that require a body OR source param will throw an ElasticsearchParseException "request body or source param required".
Replaced asserts in BulkRequest parsing code with a more descriptive IllegalArgumentException if the line contains an empty object.
Updated bulk REST test to verify an empty action line is rejected properly.
Updated BulkRequestTests with randomized testing for an empty action line.
Used try-with-resources for XContentParser in AbstractBulkByQueryRestHandler.
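A sketch of that last pattern (parser construction is elided and the import
path may vary by version; only the resource handling is the point):

```java
import java.io.IOException;
import org.elasticsearch.common.xcontent.XContentParser;

// Illustrative only: the parser is closed even if parsing throws.
static void consume(XContentParser created) throws IOException {
    try (XContentParser parser = created) {
        while (parser.nextToken() != null) {
            // handle tokens ...
        }
    } // parser.close() runs here, on both the normal and the exceptional path
}
```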
This commit exposes the secure settings in Settings.Builder, so that
the current secure settings can be retrieved and added to when creating
settings for tests. This is necessary since secure settings can only be
added once to a builder, so chains of methods using settings builders
must reuse the already set mock secure settings.
When the jarhell check fails due to a duplicate jar on the classpath,
the exception message includes the full classpath but not the duplicated
jar. For a long classpath, this can make it difficult to find the jar
that is duplicated. This commit changes the exception message to include
the duplicated jar.
Relates #24953
This removes the parsing of things like `GET /idx/_aliases,_mappings`; instead,
a user must choose between retrieving all index metadata with `GET /idx`, or only
a specific form such as `GET /idx/_settings`.
Relates to (and is a prerequisite of) #24437
This commit creates TemplateScript and associated classes so that
templates no longer need a special ScriptService.compileTemplate method.
The execute() method is equivalent to the old run() method.
relates #20426
Previously, when allocating bytes for a BigArray, the array was created
(or attempted to be created) and only then would the array be checked
for the amount of RAM used to see if the circuit breaker should trip.
This is problematic because for very large arrays, if creating or
resizing the array, it is possible to attempt to create/resize and get
an OOM error before the circuit breaker trips, because the allocation
happens before checking with the circuit breaker.
This commit ensures that the circuit breaker is checked before all big
array allocations (note, this does not affect the array allocations that
are less than 16kb which use the [Type]ArrayWrapper classes found in
BigArrays.java). If such an allocation or resizing would cause the
circuit breaker to trip, then the breaker trips before attempting to
allocate and potentially running into an OOM error from the JVM.
Closes #24790
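To make the ordering concrete, here is a self-contained toy model of
"account with the breaker before allocating" (these are not the ES classes,
though the breaker method name mirrors the real `CircuitBreaker` interface):

```java
import java.util.Arrays;

// Toy breaker: reserves bytes up front and trips before any allocation happens.
final class SimpleBreaker {
    private final long limitBytes;
    private long usedBytes;

    SimpleBreaker(long limitBytes) {
        this.limitBytes = limitBytes;
    }

    synchronized void addEstimateBytesAndMaybeBreak(long bytes, String label) {
        if (usedBytes + bytes > limitBytes) {
            // trips *before* the JVM heap is touched
            throw new IllegalStateException("circuit breaker tripped for [" + label + "]");
        }
        usedBytes += bytes;
    }
}

final class SafeArrays {
    static long[] grow(SimpleBreaker breaker, long[] array, int newSize) {
        // old order: allocate first, account later (possible OOM before the breaker trips)
        // new order: account first, only then allocate
        breaker.addEstimateBytesAndMaybeBreak((long) (newSize - array.length) * Long.BYTES, "grow");
        return Arrays.copyOf(array, newSize);
    }
}
```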
In #24605, logic was implemented to ensure that completed snapshots were
properly removed from the cluster state upon a change in master nodes.
This commit removes redundant logic that also attempted to clean up
completed snapshots from the cluster state on master election, but only
covered a limited case that was remedied in #24605.
This commit also adds a test to ensure that completed snapshots are
cleaned up at the right moment when a master election happens
before finalizing a snapshot, as well as a check to handle the case
where the old master and new master could attempt to finalize the
snapshot and write the same blob to the repository simultaneously.
This removes the `accumulateExceptions()` method (and its usage) from `TransportNodesAction` and `TransportTasksAction`, forcing both transport actions to always accumulate exceptions.
Without this change, some transport actions, like `TransportNodesStatsAction`, would respond in very unexpected ways: instead of returning an
error for a failure on a node, the response would simply be empty, with no response and no error.
This results in a very trappy response structure where users can check for an error, then attempt to blindly use the response when no error is returned.
This commit reduces the number of buckets that are generated for multi
bucket aggregations in AggregationsTests and SearchResponseTests.
The number of buckets is now limited to a maximum of 3, whereas before some
aggregations could generate up to 10 buckets.
We can hit an already closed exception when filling the gaps after
blocking operations when updating the primary term on a promoted replica
shard. We should catch this and suppress it as it is an expected outcome
instead of letting it bubble up which leads to trying to fail the shard
which throws yet another already closed exception.
Relates #25021
Some response classes in the java api expose both `getTook()` which returns a `TimeValue` and `getTookInMillis` which returns a `long` value. `getTook()` is enough as one can do `getTook().millis()` to obtain the same result as `getTookInMillis()`, which can be removed.
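The migration for callers is mechanical, e.g.:

```java
// before: long tookInMillis = response.getTookInMillis();
// after:
long tookInMillis = response.getTook().millis(); // TimeValue also exposes nanos(), seconds(), etc.
```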
* Adds nodes usage API to monitor usages of actions
The nodes usage API has 2 main endpoints:
/_nodes/usage and /_nodes/{nodeIds}/usage return the usage statistics
for all nodes and the specified node(s) respectively.
At the moment only one type of usage statistics is available, the REST
actions usage. This records the number of times each REST action class is
called; when the nodes usage API is called, it returns a map from REST
action class name to a long representing the number of times each of the
action classes has been called.
Still to do:
* [x] Create usage service to store usage statistics
* [x] Record usage in REST layer
* [x] Add Transport Actions
* [x] Add REST Actions
* [x] Tests
* [x] Documentation
* Refactors UsageService so counts are done by the handlers
* Fixing up docs tests
* Adds a name to all rest actions
* Addresses review comments
This commit adds a new bg_count field to the REST response of
SignificantTerms aggregations. Similarly to the bg_count that already
exists in significant terms buckets, this new bg_count field is set at
the aggregation level and is populated with the superset size value.
This commit adds an optional `context` url parameter to the put stored
script request. When a context is specified, the script is compiled
against that context before storing, as validation that the script will
work when used in that context.
Today there is a lot of code duplication and different handling of errors
in the two different scroll modes. Yet, it's not clear if we will keep both of
them but this simplification will help to further refactor this code to also
add cross cluster search capabilities.
This refactoring also fixes bugs when shards failed due to the node dropping out of the cluster in between scroll requests, and failures during the fetch phase of the scroll. Both places were simply ignoring the failure and logging to debug. This can cause issues like #16555
This commit provides the TransportRequest that caused the retrieval of a search context to the
SearchOperationListener#validateSearchContext method so that implementers have access to the
request.
By default, the remove plugin CLI command preserves configuration
files. This is so that if a user is upgrading the plugin (which is done
by first removing the old version and then installing the new version)
they do not lose their configuration file. Yet, there are circumstances
where preserving the configuration file is not desired. This commit adds
a purge option to the remove plugin CLI command.
Relates #24981
Currently, the decisions regarding which translog generation files to delete are hard coded in the interaction between the `InternalEngine` and the `Translog` classes. This PR extracts it to a dedicated class called `TranslogDeletionPolicy`, for two main reasons:
1) Simplicity - the code is easier to read and understand (no more two phase commit on the translog, the Engine can just commit and the translog will respond)
2) Preparing for future plans to extend the logic we need - i.e., retain multiple lucene commits and also introduce size-based retention logic, allowing people to always keep a certain amount of translog files around. The latter is useful to increase the chance of an ops-based recovery.
If the delimiter or replacement parameter is an empty string, the error is not clear enough to indicate how to fix it.
With this change, the user knows these parameters must be non-empty strings.
This is related to #24927. There was a small possibility that a test
was attempting to compress a stream with zero bytes. This was causing
a failure.
This test now requires at least one byte.
This is a follow-up to #23941. Currently there are a number of
complexities related to compression. The raw DeflaterOutputStream must
be closed prior to sending bytes to ensure that EOS bytes are written.
But the underlying ReleasableBytesStreamOutput cannot be closed until
the bytes are sent to ensure that the bytes are not reused.
Right now we have three different stream references hanging around in
TCPTransport to handle this complexity. This commit introduces
CompressibleBytesOutputStream to be one stream implementation that will
behave properly with or without compression enabled.
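The tension is visible with plain JDK streams (an illustration of the
close-ordering problem, not the ES implementation):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.DeflaterOutputStream;

final class CloseOrderingDemo {
    static byte[] compress(byte[] data) throws IOException {
        ByteArrayOutputStream target = new ByteArrayOutputStream();
        DeflaterOutputStream deflater = new DeflaterOutputStream(target);
        deflater.write(data);
        // close() is what emits the deflate end-of-stream bytes, so it must run
        // before the bytes are read. In the transport case, however, the
        // underlying buffer must stay open until the bytes are actually sent;
        // that dual lifecycle is what CompressibleBytesOutputStream encapsulates.
        deflater.close();
        return target.toByteArray();
    }
}
```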
This metric is not used in the ES codebase at all. It's also not as likely to be
used since it relies on a periodic "tick", which we don't currently use.
The took time computed for search requests does not take into account the expand search phase.
This change delays the computation to after the expand phase finishes.
Relates #24900
This makes profiling classes acquire a timer up-front that can be then reused
across all calls, in order to save bounds checks for methods that are called in
tight loops.
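A minimal model of the pattern (not the actual profiling classes): resolve the
timer once, so the tight loop performs no lookups:

```java
// Toy model; ES's profiling Timer is more elaborate.
final class Timer {
    long count;
    long totalNanos;

    void record(long nanos) {
        count++;
        totalNanos += nanos;
    }
}

final class ProfiledLoop {
    static void run(Timer timer, Runnable work, int iterations) {
        // the timer was acquired up-front by the caller; nothing is looked up per call
        for (int i = 0; i < iterations; i++) {
            long start = System.nanoTime();
            work.run();
            timer.record(System.nanoTime() - start);
        }
    }
}
```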
ScriptContexts currently understand a FactoryType that can produce
instances of the script InstanceType. However, for search scripts, this
does not work as we have the concept of LeafSearchScript that is created
per lucene segment. This commit effectively renames the existing
SearchScript class into SearchScript.LeafFactory, which is a new,
optional, class that can be defined within a ScriptContext.
LeafSearchScript is effectively renamed back into SearchScript. This
change allows the model of stateless factory -> stateful factory ->
script instance to continue, but in a generic way that any script
context may take advantage of.
relates #20426
In previous work, we refactored the delay mechanism in index shard
operation permits to allow for async delaying of acquisition. This
refactoring made explicit when permit acquisition is disabled whereas
previously we were relying on an implicit condition, namely that all
permits were acquired by the thread trying to delay acquisition. When
using the implicit mechanism, we tried to acquire a permit and if this
failed, we returned a null releasable as an indication that our
operation should be queued. Yet, now we know when we are delayed and we
should not even try to acquire a permit. If we try to acquire a permit
and one is not available, we know that we are not delayed, and so
acquisition should be successful. If it is not successful, something is
deeply wrong. This commit takes advantage of this refactoring to
simplify the internal implementation.
Relates #24971
When a primary is promoted, it could have gaps in its history due to
concurrency and in-flight operations when it was serving as a
replica. This commit fills the gaps in the history of the promoted shard
after all operations from the previous term have drained, and future
operations are blocked. This commit does not handle replicating the
no-ops that fill the gaps to any remaining replicas, that is the
responsibility of the primary/replica sync that we are laying the
groundwork for.
Relates #24945
`terms` aggregations at the root level use the `global_ordinals` execution hint by default.
When all sub-aggregators can be run in `breadth_first` mode the collected buckets for these sub-aggs are dense (remapped after the initial pruning).
But if a sub-aggregator is not deferrable and needs to collect all buckets before pruning we don't remap global ords and the aggregator needs to deal with sparse buckets.
Most (if not all) aggregators expect dense buckets and use this information to allocate memory.
This change forces the remap of the global ordinals but only when there is at least one sub-aggregator that cannot be deferred.
Relates #24788
This commit introduces a clean transition from the old primary term to
the new primary term when a replica is promoted primary. To accomplish
this, we delay all operations before incrementing the primary term. The
delay is guaranteed to be in place before we increment the term, and
then all operations that are delayed are executed after the delay is
removed which asynchronously happens on another thread. This thread does
not progress until in-flight operations that were executing are
completed, and after these operations drain, the delayed operations
re-acquire permits and are executed.
Relates #24925
Within two lines of each other appears "fallthrough" and "fall through",
both typed by the same person who should have been paying better
attention and only one of these is correct and the inconsistency is
bothersome. This commit fixes the errant one.
Drops `TokenizerFactory#name`, replacing it with
`CustomAnalyzer#getTokenizerName` which is much better targeted at
its single use case inside the analysis API.
Drops a test that I would have had to refactor which is duplicated by
`AnalysisModuleTests`.
To keep this change from blowing up in size I've left two mostly
mechanical changes to be done in followups:
1. `TokenizerFactory` can now be entirely dropped and replaced with
`Supplier<Tokenizer>`.
2. `AbstractTokenizerFactory`'s ctor still takes a `String` parameter
where the name once was.
If the bucket already exists, due to non-overlapping series or missing data, the
MovAvg creates a merged bucket with the existing aggs + the new prediction. This
fixes a small bug where the doc_count was not being set correctly.
Relates to #24327
Today if the primary throws an exception while handling the replica
response (e.g., because it is already closed while updating the local
checkpoint for the replica), or because of a bug that causes an
exception to be thrown in the replica operation listener, this exception
is caught by the underlying transport handler plumbing and is translated
into a response handler failure transport exception that is passed to
the onFailure method of the replica operation listener. This causes the
primary to turn around and fail the replica which is a disastrous and
incorrect outcome as there's nothing wrong with the replica, it is the
primary that is broken and deserves a paddlin'. This commit handles this
situation by failing the primary.
Relates #24926
This commit adds a second refresh to the concurrent relocation
test. This is necessary as the first refresh might have brought back a
local checkpoint for a shard that a newly relocated primary became aware
of but did not yet receive a local checkpoint for that shard. When that
local checkpoint arrives on the new primary, the global checkpoint could
advance again and so we need a second replication action to push that
global checkpoint back out to the replica. This is indeed a hack, and it
will eventually be removed.
Closes #24599
The `IndexDeletionPolicy` is currently instantiated by `IndexShard` and is then passed through to the engine as a parameter. That's a shame as it is really just an implementation detail and the engine already has a method to acquire a commit.
This is preparing for a follow-up PR that will connect the index deletion policy with a new translog deletion policy.
Relates to #10708
The order in which double values are added in Java can give different results,
so in testing the sum and sumOfSquares we need to allow some delta for testing
equality. The difference can be larger for large sum values, so we should
account for this by making the delta in the assertion depend on the values
magnitude.
Closes #24931
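A sketch of a magnitude-aware tolerance (the scale factors are illustrative
choices, not the ones used in the test):

```java
import static org.junit.Assert.assertEquals;

// Scale the allowed delta with the expected value's magnitude.
static void assertCloseTo(double expected, double actual) {
    double delta = Math.max(1e-12, Math.abs(expected) * 1e-10); // illustrative factors
    assertEquals(expected, actual, delta);
}
```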
ClearScrollResponse can print out its content into an XContentBuilder as it implements ToXContentObject. This PR adds a fromXContent method to it so that we are able to recreate the response object when parsing the response back. This will be used in the high level REST client.
ClearScrollRequest can be created from a request body, but it doesn't support the opposite, meaning printing out its content to an XContentBuilder. This is useful to the high level REST client and allows for better testing of what we parse.
Moved parsing method from RestClearScrollAction to ClearScrollRequest so that fromXContent and toXContent sit close to each other. Added unit tests to verify that body parameters override query_string parameters when both present (there is already a yaml test for this but unit test is even better)
SearchScrollRequest can be created from a request body, but it doesn't support the opposite, meaning printing out its content to an XContentBuilder. This is useful to the high level REST client and allows for better testing of what we parse.
Moved parsing method from RestSearchScrollAction to SearchScrollRequest so that fromXContent and toXContent sit close to each other. Added unit tests to verify that body parameters override query_string parameters when both present (there is already a yaml test for this but unit test is even better)
When proportioning the shared RAM bytes across the shards of the query
cache, there's a computation that shares these bytes according to the
relative size of the shard cache to the total size of all the shard
caches. This computation had a bug where integer division was performed
instead, which leads to this computation often being zero. This commit
fixes this bug by casting the numerator to a double before doing the
division so that double division is performed.
Relates #24856
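The bug class reproduces in isolation with illustrative numbers:

```java
final class ApportionDemo {
    public static void main(String[] args) {
        long shardCacheBytes = 100;        // this shard's cache size
        long totalCacheBytes = 1_000;      // sum over all shard caches
        long sharedRamBytes  = 1_000_000;  // shared bytes to apportion

        // buggy: integer division truncates 100/1000 to 0
        long buggy = shardCacheBytes / totalCacheBytes * sharedRamBytes;
        // fixed: cast the numerator so the division happens in double
        long fixed = (long) ((double) shardCacheBytes / totalCacheBytes * sharedRamBytes);

        System.out.println(buggy + " vs " + fixed); // prints "0 vs 100000"
    }
}
```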
The Lucene version constants for 5.4.1 and 5.5.0 are wrong; they are
listed as 6.5.0 instead of 6.5.1. This commit fixes these issues, and
adds a test to ensure that this does not happen again.
Relates #24923
This commit fixes a double decrement bug on the current query
counter. The double decrement arises in a situation when the fetch phase
is inlined for a query that is only touching one shard. After the query
phase succeeds we decrement the current query counter. If the fetch
phase ultimately fails, an exception is thrown and we decrement the
current query counter again in the catch block. We also add assertions
that all current stats counters remain non-negative at all
times.
Relates #24922
Removes the need for the `_UNRELEASED` suffix on versions by detecting if a version should be unreleased or not based on the versions around it. This should make it simpler to automate the task of adding a new version label.
In #23093 we made a change so that total bytes for a filesystem would not be a
negative value when the total bytes were > Long.MAX_VALUE.
This fixes #24453, which had a related issue where `available` and `free` bytes
could also be so large that they were negative. These will now return
`Long.MAX_VALUE` for the bytes if the JDK returns a negative value.
These tests spin up two nodes of an older version of Elasticsearch,
create some stuff, shut down the nodes, start the current version,
and verify that the created stuff works.
You can run `gradle qa:full-cluster-restart:check` to run these
tests against the head of the previous branch of Elasticsearch
(5.x for master, 5.4 for 5.x, etc) or you can run
`gradle qa:full-cluster-restart:bwcTest` to run this test against
all "index compatible" versions, one after the other. For master
this is every released version in the 5.x.y version *and* the tip
of the 5.x branch.
I'd love to add more to these tests in the future but these
currently just cover the functionality of the `create_bwc_index.py`
script and start to cover the assertions in the
`OldIndexBackwardsCompatibilityIT` test.
This commit renames the concept of the "compiled type" to a "factory
type", renaming all implementations of this class to Factory as well.
This brings it in line with the class's purpose.
This commit adds collection of all contexts to the parameters of
getScriptEngine. This will allow script engines like painless to
precache extra information about the contexts.
This test is failing sporadically and for now we mute it as we have a
failure with additional logging that should hopefully enable us to
assess the situation.
This is a simple refactoring to move the context definitions into the
type that they use. While we have multiple context names for the same
class at the moment, this will eventually become one ScriptContext per
instance type, so the pattern of a static member on the interface called
CONTEXT can be used. This commit also moves the consolidated list of
contexts provided by core ES into ScriptModule.
This change cleans up some missed TODOs for content type detection on the source of put mapping and
put index template requests. In 5.3.0 and newer versions, the source is always JSON so the content
type detection is not needed. The TODOs were missed after the change was backported to 5.3.
Relates #24798
This commit adds the ability to store and retrieve data that should be associated with a
ScrollContext. Additionally the ScrollContext was made final as we should only have a single
implementation of this concept.
This commit changes the compile method of ScriptEngine to be generic in
the same way it is on ScriptService. This moves the shim of handling the
two existing context classes into each script engine, so that each
engine can be worked on independently to convert to real handling of
contexts.
When developing the new ScriptContext, the compiled type was originally
generic, so that the instance type was also necessary. However, since
CompiledType is all that is used by the compile method signature, we
actually don't need the instance type to be generic. This commit removes
the InstanceType, and finds the Class for it through reflection on the
CompiledType method.
This commit modifies the compile method of ScriptService to be context
aware. The ScriptContext is now a generic class which contains both the
instance type and compiled type for a script. Instance type may be
stateful (for example, pre loading field information for the index a
script will execute on, like in expressions), while the compiled type is
stateless and used to construct instance type instances. This change is
only a first step to cut over ScriptService to the new paradigm. It only
converts callers to the script service, and has a small shim to wrap
compilation from the script engines to support the current two fixed
instance types, SearchScript and ExecutableScript.
Since groovy was removed, we no longer have any ScriptEngines with
resources to release. We may want to keep the option open for a script
engine to close resources, but this would not be common. This commit
adds a default implementation to ScriptEngine for `close()` to reduce
the boilerplate that must be added for a ScriptEngine implementation.
This commit increases the logging level on the index and relocate
concurrently test to obtain some insight into the global checkpoint
moving backwards.
The current logic tries to make sure we waited some (but not too long). This is unpredictable and fails all the time. This commit removes all of it and just makes sure that we throw the right exceptions after timing out.
Fixes #24369
* SignificantText aggregation - like significant_terms but doesn’t require fielddata=true; recommended for use with the `sampler` agg to limit the expense of tokenizing docs, and takes an optional `filter_duplicate_text`:true setting to avoid stats skew from repeated sections of text in search results.
Closes #23674
With #24779 in place, we can now guarantee that a single translog generation file will never have a sequence number conflict that needs to be resolved by looking at primary terms. These conflicts can occur when a replica contains an operation which isn't part of the history of a newly promoted primary. That primary can then assign a different operation to the same slot and replicate it to the replica.
PS. Knowing that each generation file is conflict free will simplify repairing these conflicts when we read from the translog.
PPS. This PR also fixes some bugs in the piping of primary terms in the bulk shard action. These bugs are a result of the legacy of IndexRequest/DeleteRequest being a ReplicationRequest. We need to change that as a follow up.
Relates to #10708
This commit cleans up tests which currently use custom script engine
implementations, converting them to use a MockScriptEngine with script
functions provided by the tests. It also creates a common set of metric
scripts which were copied across a couple metric agg tests.
A user reported uneven balancing of load on nodes handling search requests from Kibana which supplies a session ID in a routing preference. Each shardId was selecting the same node for a given session ID because one data node had all primaries and the other data node held all replicas after cluster startup.
This change counteracts the tendency to opt for the same node given the same user-supplied preference by incorporating shard ID in the hash of the preference key. This will help randomise node choices across shards.
Closes #24642
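A hedged sketch of the idea (ES uses its own hash function; `String#hashCode`
here is purely illustrative):

```java
// Before (sketch): every shard hashed only the user-supplied preference,
// so a given session id always produced the same node ordering.
static int oldRoutingHash(String preference) {
    return preference.hashCode();
}

// After (sketch): fold the shard id into the hash so node choices vary per shard.
static int newRoutingHash(String preference, int shardId) {
    return 31 * preference.hashCode() + shardId;
}
```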
This commit adds comments to org.elasticsearch.Assertions that stop
IntelliJ from complaining about using assert with side-effects, and
about using constant conditions there, as the side-effect with a constant
condition is intentionally employed.
Today in the code base we have lots of ugly code blocks like:
    boolean assertionsEnabled = false;
    assert assertionsEnabled = true;
    if (assertionsEnabled) {
        // something
    }
These are a nuisance. Instead, we can do this in exactly one place and
replace these blocks with
    if (Assertions.ENABLED) {
        // something
    }
The cool thing here is that since this is a static final field, the JIT
can optimize away the check at runtime if assertions are disabled.
Relates #24834
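One plausible shape for such a holder, consistent with the description above
(a sketch, not necessarily the committed class):

```java
public final class Assertions {

    private Assertions() {}

    public static final boolean ENABLED;

    static {
        boolean enabled = false;
        // intentional side effect: the assignment only runs when -ea is set
        assert enabled = true;
        ENABLED = enabled;
    }
}
```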
This commit moves the handling of nested and parent/child inner hits to specialized classes that can be defined outside of ES core.
InnerHitBuilderContext is now used by the parent query (nested or hasChild, ...) to build the sub context from the InnerHitBuilder definition.
BWC is also ensured so that nodes in previous versions can still send/receive inner hits to/from this version.
Relates #20257
After releasing 5.3.2, the 5.3.3 version constant was created. However,
this causes issues for the rolling upgrade tests, which expect to have
all older versions artifacts published and no point releases created off
of the older versions (older meaning more than one version behind the
current version). This commit removes the 5.3.3 version constant,
assuming we will not need it anywhere.
As we work towards contexts implying the return type of compilation, we
first need ScriptContext to not be an enum. This commit removes the
Standard enum and Plugin subclass of ScriptContext.
This commit fixes the RangeFieldMapper and RangeQueryBuilder to pass the correct relation to the RangeQuery when performing a range query over range fields.
Currently a `delete document` request against a non-existing index actually **creates** this index.
With this change the `delete document` no longer creates the previously non-existing index and throws an `index_not_found` exception instead.
However as discussed in https://github.com/elastic/elasticsearch/pull/15451#issuecomment-165772026, if an external version is explicitly used, the current behavior is preserved and the index is still created and the document is marked for deletion.
Fixes #15425
This commit is a simple cleanup to remove an unnecessary extra method on
ScriptService which was only used in 3 places. There is now only one
search method.
ScriptEngine implementations have an overridable method to indicate they
are safe to use as inline scripts. Since groovy was removed for 6.0,
there are no longer any implementations which used the default false
value. Furthermore, the value was not actually read anywhere. This
commit removes the method. The ScriptEngineRegistry was also no longer
necessary as it only was used to build a map from language to engine.
This commit removes a convenience method from index shard that is used
at exactly one call site. This method is used to callback a listener
when an operation is on too old of a primary term. Since it is only used
at one call site, we simply inline the method.
Today a replica learns of a new primary term via a cluster state update
and there is not a clean transition between the older primary term and
the newer primary term. This commit modifies this situation so that:
- a replica shard learns of a new primary term via replication
operations executed under the mandate of the new primary
- when a replica shard learns of a new primary term, it blocks
operations on older terms from reaching the engine, with a clear
transition point between the operations on the older term and the
operations on the newer term
This work paves the way for a primary/replica sync on primary
promotion. Future work will also ensure a clean transition point on a
promoted primary, and prepare a replica shard for a sync with the
promoted primary.
Relates #24779
Allows plugins to register pre-configured tokenizers. Many
of the decisions are the same as those in #24223, #24572,
and #24223. This only migrates the lowercase tokenizer but
I figure that is a good start because it proves out the features.
This change removes the field data specialization needed for the parent field and replaces it with
a simple DocValuesIndexFieldData. The underlying global ordinals are retrieved via a new function called
IndexOrdinalsFieldData#getOrdinalMap.
The children aggregation is also modified to use a simple WithOrdinals value source rather than the deleted WithOrdinals.Parent.
Relates #20257
Today when we get a metadata snapshot from the index shard we ensure
that if there is no engine started on the shard that we lock the index
writer before we go and fetch the store metadata. Yet, if we concurrently
recover that shard, recovery finalization might fail since it can't acquire
the IW lock on the directory. This is mainly due to the wrong order of acquiring
the IW lock and the metadata lock. Fetching store metadata without a started engine
should block on the metadata lock in Store.java, but since IndexShard locks the writer
first we get into a failed recovery dance, especially in tests. In production
this is less of an issue since we rarely get into this situation, if at all.
Closes #24481
The method should rather advance one token and only then require a START_OBJECT as the current token. This allows to parse given a parser that's at the beginning of the response, where the initial/current token is null.
Now the Java High Level Rest Client has tests to parse all aggregations,
this test is not needed anymore. We have better tests like
AggregationsTests and subclasses of InternalAggregationTestCase.
Related to #23965
This commit moves some functionality from PublishClusterStateAction to ZenDiscovery, which allows each class to focus on its core competencies:
- PendingStatesQueue is now solely managed by ZenDiscovery (no shared access by both PublishClusterStateAction and ZenDiscovery)
- Validation logic is handled exclusively by ZenDiscovery
This commit is a simple refactoring of the update shard logic for
primaries. Namely, there was some duplicated code here that was annoying
to have to read twice so it is now collapsed with this commit.
We have decided not to force a future version upgrade to deal with this todo. Rather, we'll keep the code until it's in our way or the opportunity arises to deal with it.
This PR revolves around places in the code where introducing a StringBuilder might make the construction
of a String easier to follow, and where the compiler's very safe way of introducing a
StringBuilder instead of String concatenation might not be optimal for performance.
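The loop case is the classic example (illustrative):

```java
// Each += desugars to a fresh StringBuilder per iteration:
static String slowJoin(String[] parts) {
    String joined = "";
    for (String part : parts) {
        joined += part; // allocates and copies every time
    }
    return joined;
}

// One explicit builder for the whole construction:
static String fastJoin(String[] parts) {
    StringBuilder sb = new StringBuilder();
    for (String part : parts) {
        sb.append(part);
    }
    return sb.toString();
}
```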
This fixes a bug in the 'date_histogram' aggregation that can happen when using 'extended_bounds'
together with some 'offset' parameter. Offsets should be applied after rounding the extended bounds
and also be applied when adding empty buckets during the reduce phase in InternalDateHistogram.
Closes #23776
Native scripts have been replaced in documentation by implementing
a ScriptEngine and they were deprecated in 5.5.0. This commit
removes the native script infrastructure for 6.0.
closes #19966
Shared settings were added initially to allow the few common settings
names across aws plugins. However, in 6.0 these settings have been
removed. The last use was in netty, but since 6.0 also has the netty 3
modules removed, there is no longer a need for the shared property. This
commit removes the shared setting property.
SearchResponse#fromXContent allows to parse a search response, including search hits, aggregations, suggestions and profile results. Only the aggs that we can parse today are supported (which means all of them but a couple that are left to support). SearchResponseTests reuses the existing test infra to randomize aggregations, suggestions and profile response.
Relates to #23331
This commit adds a new method to the SearchOperationListener that allows implementers to validate
the SearchContext immediately after it is retrieved from the active contexts. The listener may
throw a runtime exception if it deems the SearchContext is not valid and that the use of the context
should be terminated.
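A hedged sketch of an implementer's hook (the vetoing condition and helper are
hypothetical, and other listener methods are omitted for brevity):

```java
// Sketch only: isForbidden(...) is a hypothetical security/ownership check.
SearchOperationListener listener = new SearchOperationListener() {
    @Override
    public void validateSearchContext(SearchContext context) {
        if (isForbidden(context)) {
            // throwing here terminates the use of this context, per the new contract
            throw new IllegalStateException("search context is not valid for this caller");
        }
    }
    // other SearchOperationListener methods omitted for brevity
};
```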
* [TEST] Fix TransportReplicationActionTests.testRetryOnReplica for replica request
We were improperly testing that it was a `ConcreteShardRequest` instead of a
`ConcreteReplicaRequest`. This adds that change and also ensures that the
checkpoint is retrievable from the request.
* Fix line-length
Approaching the release of 6.0 we need to sort out the usage of
`Version#minimumCompatibilityVersion` which was still set to 5.0.0.
Now this change moves it to the latest released version of 5.x (5.4 at this point)
to ensure we are compatible with the latest minor of the previous major. This change
also removes all the `_UNRELEASED` from the versions that were released and drops versions
that were never released and are not expected to be released (bugfixes in minors that are not
the latest in the previous major).
Today the `_field_caps` API doesn't implement its request serialization
correctly since indices and indices options are not serialized at all.
This will likely break all transport clients etc. whenever this request
must be sent across the network. This commit fixes this and adds correct
handling if we have only remote indices to prevent the inclusion of
all local indices.
* Fix ArrayIndexOutOfBoundsException in Range Aggregation when no ranges are specified in the query
* Revert "Fix ArrayIndexOutOfBoundsException in Range Aggregation when no ranges are specified in the query"
This reverts commit ad57d8feb3577a64b37de28c6f3df96a3a49fe93.
* Fix range aggregation out of bounds exception when there are no ranges in a range or date_range query
* Fix range aggregation out of bounds exception when there are no ranges in the query
This fix is applied to range queries, date range queries, ip range queries and geo distance aggregation queries
Native scripts are no longer documented and instead using a ScriptEngine
is recommended. This change adds a deprecation warning for removal in
6.0.
relates #19966
This PR adds a new thread pool type: `fixed_auto_queue_size`. This thread pool
behaves like a regular `fixed` threadpool, except that every
`auto_queue_frame_size` operations (default: 10,000) in the thread pool,
[Little's Law](https://en.wikipedia.org/wiki/Little's_law) is calculated and
used to adjust the pool's `queue_size` either up or down by 50. A minimum and
maximum is taken into account also. When the min and max are the same value, a
regular fixed executor is used instead.
The `SEARCH` threadpool is changed to use this new type of thread pool. However,
the min and max are both set to 1000, meaning auto adjustment is opt-in rather
than opt-out.
Resolves #3890
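A sketch of the adjustment arithmetic (only the Little's Law formula and the
±50 step come from the description above; the bookkeeping is illustrative):

```java
// Little's law: L = lambda * W, where lambda is the task arrival rate and
// W is the average time a task spends in the pool (queued + executing).
static int nextQueueSize(int currentSize, long tasksInFrame, long frameNanos,
                         double avgResponseNanos, int minSize, int maxSize) {
    double lambda = (double) tasksInFrame / frameNanos; // tasks per nanosecond
    double optimal = lambda * avgResponseNanos;         // estimated optimal queue length L
    if (optimal > currentSize) {
        return Math.min(maxSize, currentSize + 50);     // step up by 50, capped at max
    } else if (optimal < currentSize) {
        return Math.max(minSize, currentSize - 50);     // step down by 50, floored at min
    }
    return currentSize;
}
```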
Moves the remaining pre-configured token filters into the analysis-common module. There were a couple of tests in core that depended on the pre-configured token filters so I had to touch them:
* `GetTermVectorsCheckDocFreqIT` depended on `type_as_payload` but didn't do anything important with it. I dropped the dependency. Then I moved the test to a single node test case because we're trying to cut down on the number of `ESIntegTestCase` subclasses.
* `AbstractTermVectorsTestCase` and its subclasses depended on `type_as_payload`. I dropped their usage of the token filter and added an integration test for the termvectors API that uses `type_as_payload` to the `analysis-common` module.
* `AnalysisModuleTests` expected a few pre-configured token filters to be registered by default. They aren't any more so I dropped this assertion. We assert that the `CommonAnalysisPlugin` registers these pre-built token filters in `CommonAnalysisFactoryTests`
* `SearchQueryIT` and `SuggestSearchIT` had tests that depended on the specific behavior of the token filters so I moved the tests to integration tests in `analysis-common`.
In scripts (at least some of the languages), the terms dictionary and
postings can be accessed with the special _index variable. This is for
very advanced use cases which want to do their own scoring. The problem
is segment level statistics must be recomputed for every document.
Additionally, this is not friendly to the terms index caching as the
order of looking up terms should be controlled by lucene.
This change removes _index from scripts. Anyone using it can and should
instead write a Similarity plugin, which is explicitly designed to allow
doing the calculations needed for a relevance score.
closes #19359
Today when an index is `read-only` the index is also blocked from
being deleted, which is sometimes undesired since in order to make
changes to a cluster, indices must be deleted to free up space. This is
a likely scenario in a hosted environment with limited disk space: indices
are switched to read-only, but deletions should still be allowed to free up space.
This moves the releasing logic to the base test, so that individual test cases don't need
to worry about releasing the aggregators. It's not a big deal for individual aggs,
but once tests start using sub-aggs, it can become tricky to free (without double-freeing)
all the aggregators.
Range queries with now based date ranges were previously not allowed,
but since #23921 these queries have been allowed. This change should really
fix range queries with now based date ranges.
Adding a unit test to InternalAdjacencyMatrix that extends the shared InternalAggregationTestCase
that we use for testing aggregations.
Relates to #22278
The test checks that the number of outgoing/incoming recoveries of a shard is 0 after recoveries are done. Sadly that is not guaranteed by the current recovery logic as we decrement the counters only when all references to the relevant RecoveryTarget object have been released. This may happen asynchronously to the recovery completion, which causes the test to fail. I looked at options to change the recovery logic to have the recovery counters decrease before the recovery is done *under normal circumstances* but I don't see a clean way to do it. Since it won't give hard guarantees anyway I opted to add assertBusy to the test
Closes #24669
This commit adds a deprecation warning if `_index` is used in scripts.
It is emitted each time a script is invoked, but not per document. There
is no test because constructing a LeafIndexLookup is quite difficult,
but the deprecation warning does show up in IndexLookupIT, there is just
no way to assert warnings in integ tests.
relates #19359
Today we assert hard if failure listeners are invoked more than once. Yet, this
can happen if we cancel the execution since the caller and the handler will get
the exception on the cancelable threads and will notify the listener concurrently
if timinig allows. This commit relaxes the assertion towards handling multiple
invocations with `ExecutionCancelledException`
Closes #24010
Closes #24179
Closes vagnerclementino/elasticsearch/#98
Template script engines (mustache, the only one) currently return a
BytesReference that users must know is utf8 encoded. This commit
modifies all callers and mustache to have the template engine return
String. This is much simpler, and does not require decoding in order to
use (for example, in ingest).
When retrieving documents to extract terms from as part of a more like this query, the _routing value can be set, yet it gets lost. That leads to not being able to retrieve the documents, hence more_like_this used to return no matches all the time.
Closes #23699
We generally accept string values when a boolean is expected. We've been doing that in our parsing code, but we missed that bit when moving parsing code to ObjectParser, which throws an error instead. This commit makes ObjectParser also parse string values into booleans. It throws an error in case the value is not `true` or `false`.
Closes #21802
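A sketch of the strict conversion (error message wording is illustrative):

```java
// Accept only the strings "true" and "false" where a boolean is expected.
static boolean booleanFromString(String value) {
    if ("true".equals(value)) {
        return true;
    }
    if ("false".equals(value)) {
        return false;
    }
    throw new IllegalArgumentException(
            "Failed to parse value [" + value + "]: only [true] or [false] are allowed");
}
```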
The rendering methods in String and Long Significant String aggregations
and buckets are very similar. They can be factored out into the
InternalSignificantTerms and InternalMappedSignificantTerms classes.
This commit changes SignificantTerms.Bucket so that it is not an
abstract class anymore but an interface. It will be easier for the Java
High Level Rest Client to provide its own implementation of
SignificantTerms and SignificantTerms.Bucket. Also, it is now more
coherent with the other aggregations.
The disruption tests sit in a single test suite which causes these tests
to be single-threaded. We can split this test suite into multiple suites
(logically, of course) enabling them to be run in parallel reducing the
total run time of all integration tests in core. This commit splits the
discovery with service disruptions test suite into three suites
- master disruptions
- discovery disruptions
- cluster disruptions
The last one could probably be better named, it is meant to represent
performing actions in the cluster (indexing, failing a shard, etc.)
while a disruption is taking place.
Relates #24662
This commit removes the deprecated support for .yaml and .json files. If
the files still exist, the node will fail to start, indicating the file
must be converted or renamed.
closes #19391
With the current implementation, SniffNodesSampler might close the
current connection right after a request is sent but before the response
is correctly handled. This causes timeouts in the transport client
when sniffing is activated.
closes #24575
closes #24557
When constructing an array list, if we know the size of the list in
advance (because we are adding objects to it derived from another list),
we should size the array list to the appropriate capacity in advance (to
avoid resizing allocations). This commit does this in various places.
Relates #24439
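For example (types are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// The destination's size is known from the source list, so size it up front.
static <T> List<String> describeAll(List<T> source) {
    List<String> result = new ArrayList<>(source.size()); // no growth reallocations
    for (T item : source) {
        result.add(String.valueOf(item));
    }
    return result;
}
```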
* Add parent-join module
This change adds a new module named `parent-join`.
The goal of this module is to provide a replacement for the `_parent` field but as a first step this change only moves the `has_child`, `has_parent` queries and the `children` aggregation to this module.
These queries and aggregations are no longer in core but they are deployed by default as a module.
Relates #20257
Today we prune transport handlers in TransportService when a node is disconnected.
This can cause connections to starve in the TransportService if the connection is
opened as a short-lived connection, i.e. without sharing the connection to a node
via registering in the transport itself. This change now moves to pruning based
on the connections cache key to ensure we notify handlers as soon as the connection
is closed for all connections not just for registered connections.
Relates to #24632
Relates to #24575
Relates to #24557
- Removes clusterState, getInitialClusterState and getMinimumMasterNodes methods from Discovery interface.
- Sets PingContextProvider in ZenPing constructor
- Renames state in ZenDiscovery to committedState
This commit documents how to write a `ScriptEngine` in order to use
expert internal apis, such as using Lucene directly to find index term
statistics. These documents prepare the way to remove both native
scripts and IndexLookup.
The example java code is actually compiled and tested under a new gradle
subproject for example plugins. This change does not yet breakup
jvm-example into the new examples dir, which should be done separately.
relates #19359
relates #19966
Specifying s3 access and secret keys inside repository settings is not
secure. However, until there is a way to dynamically update secure
settings, this is the only way to dynamically add repositories with
credentials that are not known at node startup time. This commit adds
back `access_key` and `secret_key` s3 repository settings, but protects
them with a required system property `allow_insecure_settings`.
This allows other plugins to use a client to call the functionality
that is in the core modules without duplicating the logic.
Plugins can now safely send the request and response classes via the
client even if the requests are executed locally. All relevant classes
are loaded by the core classloader such that plugins can share them.
This re-adds the commit that was reverted in 952feb58e4
If we fail to acquire the shard lock and need to retry and wait for the new
cluster state, we were sending the wrong kind of request for the replica
action. This commit fixes this issue.
This commit adds support for histogram and date_histogram agg compound order by refactoring and reusing terms agg order code. The major change is that the Terms.Order and Histogram.Order classes have been replaced/refactored into a new class BucketOrder. This is a breaking change for the Java Transport API. For backward compatibility with previous ES versions the (date)histogram compound order will use the first order. Also the _term and _time aggregation order keys have been deprecated; replaced by _key.
Relates to #20003: now that all these aggregations use the same order code, it should be easier to move validation to parse time (as a follow up PR).
Relates to #14771: histogram and date_histogram aggregation order will now be validated at reduce time.
Closes #23613: if a single BucketOrder that is not a tie-breaker is added with the Java Transport API, it will be converted into a CompoundOrder with a tie-breaker.
This commit addresses an issue in the missing active IDs prevent advance
test from the global checkpoint tracker. The assumptions this test was
making about reality were violated when global checkpoints were inlined
(specifically, the component of that change where the tracker's
knowledge of the global checkpoint was updated inline with updates to
the tracker's knowledge of local checkpoints for an allocation ID). The
point of the test was to ensure that a lagging shard prevents the global
checkpoint from advancing, so this commit rewrites the test with that in
mind.
This allows other plugins to use a client to call the functionality
that is in the core modules without duplicating the logic.
Plugins can now safely send the request and response classes via the
client even if the requests are executed locally. All relevant classes
are loaded by the core classloader such that plugins can share them.
With global checkpoints we also take into account if a global checkpoint must be fsynced.
Yet, with the recent addition of inlining global checkpoints into indexing operations, from a
test perspective unnecessary fsyncs might be reported if `Translog#syncNeeded` is checked.
Now the test only checks if the last write location triggers an fsync instead.
Closes #24600
This commit fixes an issue in the checkpoints advance test. Namely, when
there are zero documents indexed, after the global checkpoint is synced, the
global checkpoint will have advanced to the no ops performed. There is a
larger conceptual problem here, namely that the primary does not update
its knowledge of its own local checkpoint upon recovery which causes the
global checkpoint to initially be unassigned and then advance to no ops
performed, but this will be addressed in a follow-up.
This adds parsing to all implementations of SingleBucketAggregations. They are mostly similar, so they share the common
base class `ParsedSingleBucketAggregation` and the shared base test `InternalSingleBucketAggregationTestCase`.
Previously a query weight was created for each search hit that needed to compute inner hits;
with this change the weight of the inner hit query is computed once for all search hits.
Closes #23917
We allow non-dynamic settings to be updated on closed indices but we don't
check if the updated settings can be used to open/create the index.
This can lead to unrecoverable state where the settings are updated but the index
cannot be reopened since the settings are not valid. Trying to update the invalid settings
is also not possible since the update will fail to validate the current settings.
This change adds the validation of the updated settings for closed indices and make sure that the new settings
do not prevent the reopen of the index.
Fixes #23787
Previously, if a master node updated the cluster state to reflect that a
snapshot is completed, but subsequently failed before processing a
cluster state to remove the snapshot from the cluster state, then the
newly elected master would not know that it needed to clean up the
leftover cluster state.
This commit ensures that the newly elected master sees if there is a
snapshot in the cluster state that is in the completed state but has not
yet been removed from the cluster state.
Closes #24452
There are now three public static methods to build instances of
PreConfiguredTokenFilter and the ctor is private. I chose static
methods instead of constructors because those allow us to change
out the implementation returned if we so desire.
Relates to #23658
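A hedged sketch of registering one such filter (the `singleton` name and its
signature are assumptions based on the description above):

```java
// Assumption: a static factory that builds the same filter for every index.
PreConfiguredTokenFilter filter = PreConfiguredTokenFilter.singleton(
        "asciifolding",          // name analysis configs refer to
        true,                    // usable in multi-term queries
        ASCIIFoldingFilter::new  // wraps the incoming TokenStream
);
```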
We have to do something to force the global checkpoint to be
synchronized to the replicas or the assertions at the end of the test
that they are in sync will trip. Since the last write operation to hit a
replica shard will only carry the penultimate global checkpoint (it will
advance when the replicas respond with their local checkpoint), and a
background sync will not happen until the primary shard falls idle, we
force a sync through a refresh action.
Currently, the get snapshots API (e.g. /_snapshot/{repositoryName}/_all)
provides information about snapshots in the repository, including the
snapshot state, number of shards snapshotted, failures, etc. In order
to provide information about each snapshot in the repository, the call
must read the snapshot metadata blob (`snap-{snapshot_uuid}.dat`) for
every snapshot. In cloud-based repositories, this can be expensive,
both from a cost and performance perspective. Sometimes, all the user
wants is to retrieve all the names/uuids of each snapshot, and the
indices that went into each snapshot, without any of the other status
information about the snapshot. This minimal information can be
retrieved from the repository index blob (`index-N`) without needing to
read each snapshot metadata blob.
This commit enhances the get snapshots API with an optional `verbose`
parameter. If `verbose` is set to false on the request, then the get
snapshots API will only retrieve the minimal information about each
snapshot (the name, uuid, and indices in the snapshot), and only read
this information from the repository index blob, thereby giving users
the option to retrieve the snapshots in a repository in a more
cost-effective and efficient manner.
Closes #24288
It was using the wrong version, which can cause errors like
```
1> java.security.AccessControlException: access denied ("java.net.SocketPermission" "[0:0:0:0:0:0:0:1]:34221" "connect,resolve")
1> at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) ~[?:1.8.0_111]
1> at java.security.AccessController.checkPermission(AccessController.java:884) ~[?:1.8.0_111]
1> at java.lang.SecurityManager.checkPermission(SecurityManager.java:549) ~[?:1.8.0_111]
1> at java.lang.SecurityManager.checkConnect(SecurityManager.java:1051) ~[?:1.8.0_111]
1> at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:625) ~[?:?]
1> at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processSessionRequests(DefaultConnectingIOReactor.java:273) ~[httpcore-nio-4.4.5.jar:4.4.5]
1> at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents(DefaultConnectingIOReactor.java:139) ~[httpcore-nio-4.4.5.jar:4.4.5]
1> at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:348) ~[httpcore-nio-4.4.5.jar:4.4.5]
1> at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:192) ~[httpasyncclient-4.1.2.jar:4.1.2]
1> at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64) ~[httpasyncclient-4.1.2.jar:4.1.2]
```
when running tests.
If we don't do this, although we don't set the subAggsSupplier again, we will reuse the one set in the previous run, hence we can end up in situations where we keep on generating sub aggregations till a StackOverflowError gets thrown.
This commit terminates any controller processes plugins might have after
the node has been closed. This gives the plugins a chance to shut down their
controllers gracefully.
Previously there was a race condition where controller processes could be shut
down gracefully and terminated by two threads running in parallel, leading to
non-deterministic outcomes.
Additionally, controller processes that failed to shut down gracefully were
not forcibly terminated when running as a Windows service; there was a reliance
on the plugin to shut down its controller gracefully in this situation.
This commit also fixes this problem.
Adds a new "icu_collation" field type that exposes lucene's
ICUCollationDocValuesField. ICUCollationDocValuesField is the replacement
for ICUCollationKeyFilter which has been deprecated since Lucene 5.
This commit renames ScriptEngineService to ScriptEngine. It is often
confusing because we have the ScriptService, and then
ScriptEngineService implementations, but the latter are not services as
we see in other places in elasticsearch.
File scripts have 2 related settings: the path of file scripts, and
whether they can be dynamically reloaded. This commit deprecates those
settings.
relates #21798
A constraint on the global checkpoint was inadvertently committed from
the inlining global checkpoint work. Namely, the constraint prevents the
global checkpoint from advancing to no ops performed, a situation that
can occur when shards are started but empty.
Today we rely on background syncs to relay the global checkpoint under
the mandate of the primary to its replicas. This means that the global
checkpoint on a replica can lag far behind the primary. The commit moves
to inlining global checkpoints with replication requests. When a
replication operation is performed, the primary will send the latest
global checkpoint inline with the replica requests. This keeps the
replicas closer in-sync with the primary.
However, consider a replication request that is not followed by another
replication request for an indefinite period of time. When the replicas
respond to the primary with their local checkpoint, the primary will
advance its global checkpoint. During this indefinite period of time,
the replicas will not be notified of the advanced global
checkpoint. This necessitates a need for another sync. To achieve this,
we perform a global checkpoint sync when a shard falls idle.
Relates #24513
AggregationsTests#testFromXContent verifies that parsing of aggregations works by combining multiple aggs at the same level, and also adding sub-aggregations to multi bucket and single bucket aggs, up to a maximum depth of 5.
This changes the way we register pre-configured token filters so that
plugins can declare them and starts to move all of the pre-configured
token filters out of core. It doesn't finish the job because doing
so would make the change unreviewably large. So this PR includes
a shim that keeps the "old" way of registering pre-configured token
filters around.
The Lowercase token filter is special because there is a "special"
interaction between it and the lowercase tokenizer. I'm not sure
exactly what to do about it so for now I'm leaving it alone with
the intent of figuring out what to do with it in a followup.
This also renames these pre-configured token filters from
"pre-built" to "pre-configured" because that seemed like a more
descriptive name.
This is a part of #23658
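A hedged sketch of what declaring a pre-configured token filter from a plugin can look like (treat the exact signatures as assumptions; they may differ by version):
```java
import java.util.Collections;
import java.util.List;

import org.apache.lucene.analysis.miscellaneous.ASCIIFoldingFilter;
import org.elasticsearch.index.analysis.PreConfiguredTokenFilter;
import org.elasticsearch.plugins.AnalysisPlugin;
import org.elasticsearch.plugins.Plugin;

public class MyAnalysisPlugin extends Plugin implements AnalysisPlugin {
    @Override
    public List<PreConfiguredTokenFilter> getPreConfiguredTokenFilters() {
        // "my_ascii_folding" is an illustrative name; the boolean indicates
        // whether the filter should also apply to multi-term queries
        return Collections.singletonList(
            PreConfiguredTokenFilter.singleton("my_ascii_folding", true, ASCIIFoldingFilter::new));
    }
}
```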
This test waited 10 seconds for a refresh listener to appear in
the stats. It turns out that in our NFS testing infrastructure this can
take a lot longer than 10 seconds. The error reported here:
https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+nfs/257/consoleFull
has it taking something like 15 seconds. This bumps the timeout
to a solid minute.
Closes#24417
Now that indices have a single type by default, we can move to the next step
and identify documents using their `_id` rather than the `_uid`.
One notable change in this commit is that I made deletions implicitly create
types. This helps with the live version map in the case that documents are
deleted before the first type is introduced. Otherwise there would be no way
to differentiate `DELETE index/foo/1` followed by `PUT index/foo/1` from
`DELETE index/bar/1` followed by `PUT index/foo/1`, even though those are
different if versioning is involved.
If a node in version >= 5.3 acts as a coordinating node during a scroll request that targets a single shard, the scroll may return the same documents over and over iff the targeted shard is hosted by a node with a version <= 5.3.
The nodes in this version will advance the scroll only if the search_type has been set to `query_and_fetch`, though this search type has been removed in 5.3.
This change handles this situation by adding the removed search_type in the request that targets a node in version <= 5.3.
Previously, Mustache would call `toString` on the `_ingest.timestamp`
field and return a date format that did not match Elasticsearch's
defaults for date-mapping parsing. The new ZonedDateTime class in Java 8
happens to format itself the same way ES expects.
This commit adds support for a feature flag that enables the usage of this new date format
that has more native behavior.
Fixes#23168.
This new fix can be found in the form of a cluster setting called
`ingest.new_date_format`. By default, in 5.x, the existing behavior
will remain the same. One will set this property to `true` in order to
take advantage of this update for ingest-pipeline convenience.
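A sketch of opting in, assuming an existing `client` (the setting name is from the text above; persistent vs transient is the caller's choice):
```java
// org.elasticsearch.client.Client client = ...;
client.admin().cluster().prepareUpdateSettings()
    .setPersistentSettings(
        org.elasticsearch.common.settings.Settings.builder()
            .put("ingest.new_date_format", true)
            .build())
    .get();
```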
This commit fixes inefficient (worst case exponential) loading of
snapshot repository data when checking for incompatible snapshots,
that was introduced in #22267. When getting snapshot information,
getRepositoryData() was called on every snapshot, so if there are
a large number of snapshots in the repository and _all snapshots
were requested, the performance degraded exponentially. This
commit fixes the issue by only calling getRepositoryData once and
using the data from it in all subsequent calls to get snapshot
information.
Closes#24509
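The shape of the fix, sketched with stub types (the real repository API differs):
```java
import java.util.ArrayList;
import java.util.List;

class SnapshotInfoSketch {
    interface Repository { RepositoryData getRepositoryData(); }
    interface RepositoryData {}
    interface SnapshotId {}
    interface SnapshotInfo {}

    SnapshotInfo build(RepositoryData data, SnapshotId id) {
        return null; // stub
    }

    List<SnapshotInfo> snapshots(Repository repository, List<SnapshotId> ids) {
        // the fix: load the repository data once instead of once per snapshot
        RepositoryData data = repository.getRepositoryData();
        List<SnapshotInfo> infos = new ArrayList<>();
        for (SnapshotId id : ids) {
            infos.add(build(data, id));
        }
        return infos;
    }
}
```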
When starting a new replication group in an index level replication test
case, a started replica would not have a valid recovery state. This
violates simple assumptions as replicas always have to have recovered
before being started. This commit ensures that started replicas in these
tests have a valid recovery state so that this assumption is not violated.
We previously removed this assertion because it could be violated in
races. This commit adds this assertion back with sampling done more
carefully to avoid failures solely due to race conditions.
When multiple bootstrap checks fail, it's not clear where one error
message begins and the next error message ends. This commit numbers the
bootstrap check error messages so they are easier to read.
Relates #24548
This starts breaking up the `UpdateHelper.prepare` method so that each piece can
be individually unit tested. No actual functionality has changed.
Note however, that I did add a TODO about `ctx.op` leniency, which I'd love to
remove as a separate PR if desired.
This commit fixes a bug in the cache expire after access
implementation. The bug is this: if you construct a cache with an expire
after access of T, put a key, and then touch the key at some time t > T,
the act of getting the key would update the access time for the entry
before checking if the entry was expired. There are situations in which
expire after access would be honored (e.g., if the cache needs to prune
the LRU list to keep the cache under a certain weight, or a manual
refresh was called) but this behavior is otherwise broken.
Relates #24546
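A minimal sketch of the corrected read path (SimpleCache and its fields are illustrative, not the actual Elasticsearch Cache implementation):
```java
import java.util.HashMap;
import java.util.Map;

class SimpleCache<K, V> {
    private static class Entry<V> {
        V value;
        long accessTimeNanos;
        Entry(V value, long now) { this.value = value; this.accessTimeNanos = now; }
    }

    private final Map<K, Entry<V>> map = new HashMap<>();
    private final long expireAfterAccessNanos;

    SimpleCache(long expireAfterAccessNanos) {
        this.expireAfterAccessNanos = expireAfterAccessNanos;
    }

    void put(K key, V value) {
        map.put(key, new Entry<>(value, System.nanoTime()));
    }

    V get(K key) {
        Entry<V> entry = map.get(key);
        if (entry == null) {
            return null;
        }
        long now = System.nanoTime();
        // the fix: test expiration against the previous access time *before*
        // refreshing it; the bug refreshed it first, so an entry touched
        // after its deadline was never seen as expired
        if (now - entry.accessTimeNanos > expireAfterAccessNanos) {
            map.remove(key);
            return null;
        }
        entry.accessTimeNanos = now;
        return entry.value;
    }
}
```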
In order to make MockLogAppender (a utility to test logging) available outside
of es-core, this moves MockLogAppender from the core tests to the test
framework. As the package name does not change, no changes are needed in clients.
Today when opening the engine we skip gaps in the history, advancing the
local checkpoint until it is equal to the maximum sequence number
contained in the commit. This allows history to advance, but it leaves
gaps. A previous change filled these gaps when recovering from store,
but since we were skipping the gaps while opening the engine, this
change had no effect. This commit removes the gap skipping when opening
the engine allowing the gap filling to do its job.
Relates #24535
I stumbled on this code today and I hated it; I wrote it. I did not like
that you could not tell at a glance whether or not the method parameters
were correct. This commit fixes it.
Relates #24522
This commit changes the Terms.Bucket abstract class to an interface, so
that it's easier for the Java High Level Rest Client to provide its own
implementation.
In its current state, the Terms.Bucket abstract class inherits from
InternalMultiBucketAggregation.InternalBucket which forces subclasses to
implement Writeable and exposes a public getProperty() method that relies
on InternalAggregation. These two points make it difficult for the Java
High Level Rest Client to implement the Terms and Terms.Bucket correctly.
This is also different from other MultiBucketsAggregation like Range
which are pure interfaces.
Changing Terms.Bucket to an interface causes a method clash for the
`getBuckets()` method in InternalTerms. This is because:
- InternalTerms implements Terms, which declares a
`List<Terms.Bucket> getBuckets()` method
- InternalTerms extends InternalMultiBucketAggregation, which declares a
`List<? extends InternalBucket> getBuckets()` method
- both override the MultiBucketsAggregation
`List<? extends Bucket> getBuckets()` method
There was no clash before this change because Terms.Bucket extended
InternalBucket and conformed to both declarations. With Terms.Bucket now
an interface, the getBuckets() method in the Terms interface is changed
to avoid method clash. This is a breaking change in the Java API but
it's a straightforward change and the Terms multi bucket aggregation
interface is also more coherent with the other Range, Histogram,
Filters, AdjacencyMatrix etc that all return a `List<? extends Bucket>`.
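A simplified, compilable sketch of the clash and its resolution (interface names are stand-ins for the real classes):
```java
import java.util.List;

interface Bucket {}
interface InternalBucket extends Bucket {}

interface MultiBucketsAggregation {
    List<? extends Bucket> getBuckets();
}

interface Terms extends MultiBucketsAggregation {
    // Before: this returned List<TermsBucket> and TermsBucket extended
    // InternalBucket, so InternalTerms could satisfy both parents. With
    // TermsBucket a plain interface, no return type is a subtype of both
    // List<TermsBucket> and List<? extends InternalBucket>, hence the
    // relaxed wildcard declaration:
    @Override
    List<? extends Bucket> getBuckets();
}

interface InternalMultiBucketAggregation extends MultiBucketsAggregation {
    @Override
    List<? extends InternalBucket> getBuckets();
}

// compiles: List<? extends InternalBucket> is a subtype of List<? extends Bucket>
interface InternalTerms extends Terms, InternalMultiBucketAggregation {
    @Override
    List<? extends InternalBucket> getBuckets();
}
```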
This commit adds support for indexing and searching a new ip_range field type. Both IPv4 and IPv6 formats are supported. Tests are updated and docs are added.
This change will expand the shard level request to the actual concrete index or to the aliases
that expanded to the concrete index, to ensure shard level requests won't see wildcard expressions as their original indices.
If a field caps request contains a field name that doesn't exist in all indices
the response will be partial and we hit an NPE. The NPE is now fixed but we still
have the problem that we don't pass on errors on the shard level to the user. This will
be fixed in a followup.
The `_search_shards` API today only returns alias names if there is an alias
filter associated with one of them. Yet it can be useful to see which aliases
have been expanded for an index given the index expressions. This change also includes non-filtering aliases even when no filtering alias is present.
Changes the scope of the AllocationService dependency injection hack so that it is at least contained to the AllocationService and does not leak into the Discovery world.
Today we go to heroic lengths to workaround bugs in the JDK or around
issues like BSD jails to get information about the underlying file
store. For example, we went to lengths to work around a JDK bug where
the file store returned would incorrectly report whether or not a path
is writable in certain situations in Windows operating
systems. Another bug prevented getting file store information for a
virtual drive on Windows. We no longer need to work
around these bugs, we could simply try to write to disk and let an I/O
exception arise if we could not write to the disk or take advantage of
the fact that these bugs are fixed in recent releases of the JDK
(e.g., the file store bug is fixed since 8u72). Additionally, we
collected information about all file stores on the system which meant
that if the user had a stale NFS mount, Elasticsearch could hang and
fail on startup if that mount point was not available. Finally, we
collected information through Lucene about whether or not a disk was a
spinning disk versus an SSD, information that we do not need since we
assume SSDs by default. This commit takes into consideration that we
simply do not need this heroic effort, we do not need information
about all file stores, and we do not need information about whether or
not a disk spins to greatly simplify file store handling.
Relates #24402
This commit fixes the reschedule async fsync test in index service
tests. This test was passing for the wrong reason. Namely, the test was
trying to set translog durability to async, fire off an indexing
request, and then assert that the translog eventually got fsynced. The
problem here is that in the course of issuing the indexing request, a
mapping update is triggered. The mapping update triggers the index
settings to be refreshed. Since the test did not issue a cluster state
update to change the durability from request to async but instead did
this directly through the index service, the mapping update flops the
durability back to request. This means that when the indexing request
executes, an fsync is performed after the request and the assertion that
an fsync is not needed passes, but for the wrong reason (in short: the
test wanted it to pass because an async fsync fired, but instead it
passed because a request fsync fired). This commit fixes this by going
through the index settings API so that a proper cluster state update is
triggered and so the mapping update does not flop the durability back to
request.
We are still chasing a test failure here and increasing the logging
level stopped the failures. We have a theory as to what is going on so
this commit reduces the logging level to hopefully trigger the failure
again and give us the logging that we need to confirm the theory.
This is a follow-up to #24317, which did the hard work but was merged in such a
way that it exposes the setting while still allowing indices to have multiple
types by default in order to give time to people who test against master to
add this setting to their index settings.
This test started failing but the logging here is insufficient to
discern what is happening. This commit increases the logging level on
this test until the failure can be understood.
The UpgraderPlugin adds two additional extension points called during cluster upgrade and when old indices are introduced into the cluster state during initial recovery, restore from a snapshot, or as a dangling index. One extension point allows plugins to update old templates and the other allows plugins to update or check index metadata.
Currently the only implementation of `ListenableActionFuture` requires
dispatching listener execution to a thread pool. This commit renames
that variant to `DispatchingListenableActionFuture` and converts
`AbstractListenableActionFuture` to be a variant that does not require
dispatching. That class is now named `PlainListenableActionFuture`.
The DiscoveryNodesProvider interface provides an unnecessary abstraction and is just used in conjunction with the existing PingContextProvider interface. This commit removes it.
Open/Close index APIs have allow_no_indices set to false by default, while delete index has it set to true. The flag controls whether a wildcard expression that matches no indices is ignored or an error is thrown instead. This commit aligns the open/close default behaviour with that of delete index.
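A sketch of the flag in terms of IndicesOptions (the factory method is from the ES codebase; the surrounding class is illustrative):
```java
import org.elasticsearch.action.support.IndicesOptions;

public class AllowNoIndicesSketch {
    public static void main(String[] args) {
        // allowNoIndices=true: a wildcard like "logs-*" that matches nothing
        // is silently ignored (the delete index default, and now the
        // open/close default too); false raises an error instead
        IndicesOptions options = IndicesOptions.fromOptions(
            false, // ignoreUnavailable
            true,  // allowNoIndices
            true,  // expandToOpenIndices
            true   // expandToClosedIndices
        );
        System.out.println(options.allowNoIndices());
    }
}
```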
This field was only ever used in the constructor, where it was set and
then passed around. As such, there's no need for it to be a field and we
can remove it.
After a replica shard finishes recovery, it will be marked as active and
its local checkpoint will be considered in the calculation of the global
checkpoint on the primary. If there were operations in flight during
recovery, when the replica is activated its local checkpoint could be
lagging behind the global checkpoint on the primary. This means that
when the replica shard is activated, we can end up in a situation where
a global checkpoint update would want to move the global checkpoint
backwards, violating an invariant of the system. This only arises if a
background global checkpoint sync executes, which today is only a
scheduled operation and might be delayed until the in-flight operations
complete and the replica catches up to the primary. Yet, we are going to
move to inlining global checkpoints which will cause this issue to be
more likely to manifest. Additionally, the global checkpoint on the
replica, which is the local knowledge on the replica updated under the
mandate of the primary, could be higher than the local checkpoint on the
replica, again violating an invariant of the system. This commit
addresses these issues by blocking global checkpoint on the primary when
a replica shard is finalizing recovery. While we have blocked global
checkpoint advancement, recovery on the replica shard will not be
considered complete until its local checkpoint advances to the blocked
global checkpoint.
Relates #24404
Today we only look up nodes by their ID, never by the (clusterAlias, nodeId) tuple.
This could in theory lead to lookups on the wrong cluster if there are 2 clusters
with a node that has the same ID. It's very unlikely to happen but we now can clearly
disambiguate between clusters and their nodes.
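A hypothetical sketch of such a composite lookup key:
```java
import java.util.Objects;

// keys a node map by (clusterAlias, nodeId) so identical node IDs in two
// clusters cannot collide; names are illustrative
final class ClusterNodeKey {
    final String clusterAlias; // e.g. "" for the local cluster
    final String nodeId;

    ClusterNodeKey(String clusterAlias, String nodeId) {
        this.clusterAlias = clusterAlias;
        this.nodeId = nodeId;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o instanceof ClusterNodeKey == false) return false;
        ClusterNodeKey that = (ClusterNodeKey) o;
        return clusterAlias.equals(that.clusterAlias) && nodeId.equals(that.nodeId);
    }

    @Override
    public int hashCode() {
        return Objects.hash(clusterAlias, nodeId);
    }
}
```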
This change makes the request builder code-path the same as `Client#execute`. The request builder used to return a `ListenableActionFuture` when calling execute, which allows listeners to be associated with the returned future. For async execution, though, it is recommended to use the `execute` method that accepts an `ActionListener`, like users would do when using `Client#execute`.
Relates to #24412
Relates to #9201
SearchResponseSections is the common part extracted from InternalSearchResponse that can be shared between high level REST and Elasticsearch. The only bits left out are around serialization, which is not supported. This way it can accept Aggregations as a constructor argument, without requiring InternalAggregations, as the high level REST client uses its own objects for aggs parsing rather than the internal ones.
This change also makes Aggregations implement ToXContent, and Aggregation extend ToXContent. Especially the latter is suboptimal, but it is the solution that allows sharing as much code as possible between core and the client without messing with generics and complicating the API. It also has no real downsides, as all of the current implementations of Aggregation already implement ToXContent.
Async shard fetching only uses the node id to correlate responses to requests. This can lead to a situation where a response from an earlier request is mistaken as response from a new request when a node is restarted. This commit adds unique round ids to correlate responses to requests.
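A sketch of the round-id correlation (names hypothetical):
```java
import java.util.concurrent.atomic.AtomicLong;

class AsyncShardFetchSketch<R> {
    private final AtomicLong currentRound = new AtomicLong();

    long startNewRound() {
        return currentRound.incrementAndGet();
    }

    void onResponse(long responseRound, String nodeId, R response) {
        if (responseRound != currentRound.get()) {
            return; // stale response from an earlier round (e.g. sent
                    // before a node restart); drop it
        }
        // ... process the response for nodeId ...
    }
}
```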
We often want the JVM arguments used for a running instance of
Elasticsearch. It sure would be nice if these came as part of the nodes
API, or any API that includes JVM info. This commit causes these
arguments to be displayed.
Relates #24450
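The flags are available from the runtime MXBean; a minimal sketch of the kind of call that can back such reporting:
```java
import java.lang.management.ManagementFactory;
import java.util.List;

public class JvmArgsSketch {
    public static void main(String[] args) {
        List<String> jvmArgs =
            ManagementFactory.getRuntimeMXBean().getInputArguments();
        jvmArgs.forEach(System.out::println);
    }
}
```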
These can be very useful to have, let's have them at our fingertips in
the logs (in case the JVM dies and we can no longer retrieve them, at
least they are here).
Relates #24451