The tribe service can take a while to initialize, depending on how many clusters it needs to connect to. This change moves writing the ports file used by tests to before the tribe service is started.
This adds the `index.mapping.single_type` setting, which enforces that indices
have at most one type when it is true. The default value is true for 6.0+ indices
and false for old indices.
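A minimal sketch (not part of the commit) of applying the setting when creating an index with the Java client; the `client` variable, index name and type name are assumed for illustration:

```java
// Hypothetical example: create an index that enforces a single mapping type.
// Adding a second type to this index would now be rejected.
client.admin().indices().prepareCreate("my_index")
        .setSettings(Settings.builder().put("index.mapping.single_type", true))
        .addMapping("my_type", "field1", "type=keyword")
        .get();
```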
Relates #15613
The one argument ctor for `Script` creates a script with the
default language, but most usages are for testing and either
don't care about the language or are for use with
`MockScriptEngine`. This replaces most usages of the one argument
ctor on `Script` with calls to `ESTestCase#mockScript` to make
it clear that the tests don't need the default scripting language.
I've also factored out some copy and pasted script generation
code into a single place. I would have had to change that code
to use `mockScript` anyway, so it was easier to perform the
refactor.
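A before/after sketch of the change (the script source is invented and the `mockScript` signature is approximate):

```java
// before: the one argument ctor silently picks the default script language
Script script = new Script("state.counter += 1");

// after: explicit that the test only needs MockScriptEngine, not the default language
Script script = ESTestCase.mockScript("state.counter += 1");
```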
Relates to #16314
In case of a Cross Cluster Search, the coordinating node should split the original indices per cluster and send each cluster only its own set of original indices, rather than the set taken from the original search request, which contains all the indices.
In fact, each remote cluster should not be aware of the indices belonging to other remote clusters.
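A rough sketch of the idea (names hypothetical, not the actual implementation): group the requested index expressions by cluster alias so each remote cluster only receives its own indices.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class SplitIndicesSketch {
    // "remote1:index1" goes to cluster "remote1" as "index1"; a plain "index2" stays local.
    static Map<String, List<String>> splitPerCluster(String[] originalIndices) {
        Map<String, List<String>> indicesPerCluster = new HashMap<>();
        for (String expression : originalIndices) {
            int separator = expression.indexOf(':');
            String cluster = separator >= 0 ? expression.substring(0, separator) : "_local_";
            String index = separator >= 0 ? expression.substring(separator + 1) : expression;
            indicesPerCluster.computeIfAbsent(cluster, k -> new ArrayList<>()).add(index);
        }
        return indicesPerCluster;
    }
}
```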
This test can run into a split-brain situation as minimum_master_nodes is not properly set. To prevent this, make sure that at least one of the two
master nodes that are initially started has minimum_master_nodes correctly set.
When an index is shrunk using the shrink APIs, the shrink operation adds
some internal index settings to the shrink index, for example
`index.shrink.source.name|uuid` to denote the source index, as well as
`index.routing.allocation.initial_recovery._id` to denote the node on
which all shards for the source index resided when the shrunken index
was created. However, this presents a problem when taking a snapshot of
the shrunken index and restoring it to a cluster where the initial
recovery node is not present, or restoring to the same cluster where the
initial recovery node is offline or decommissioned. The restore
operation fails to allocate the shard in the shrunken index to a node
when the initial recovery node is not present, and a restore type of
recovery will *not* go through the PrimaryShardAllocator, meaning that
it will not have the chance to force allocate the primary to a node in
the cluster. Rather, restore initiated shard allocation goes through
the BalancedShardAllocator which does not attempt to force allocate a
primary.
This commit fixes the aforementioned problem by not requiring allocation
to occur on the initial recovery node when the recovery type is a
restore of a snapshot. This commit also ensures that the internal
shrink index settings are recognized and not archived (which can trip an
assertion in the restore scenario).
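An abstract sketch of the allocation-side change (not the actual decider code): the initial recovery constraint is only meaningful when recovering an existing copy, so it can be skipped when the shard is being restored from a snapshot.

```java
// If the shard is being restored from a snapshot, ignore
// index.routing.allocation.initial_recovery._id since the original node may be gone.
if (shardRouting.recoverySource().getType() == RecoverySource.Type.SNAPSHOT) {
    return allocation.decision(Decision.YES, NAME, "shard is being restored from a snapshot");
}
```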
Closes #24257
This removes a few lines of dead code from ScriptedMetricAggregationBuilder.
It was completely dead code: it added things to a Set that was then not used in any way.
Another step down the road to dropping the
lucene-analyzers-common dependency from core.
Note that this removes some tests that no longer compile from
core. I played around with adding them to the analysis-common
module where they would compile but we already test these in
the tests generated from the example usage in the documentation.
I'm not super happy with the way that `requiresAnalysisSettings`
works with regards to plugins. I think it'd be fairly bug-prone
for plugin authors to use. But I'm making it visible as is for
now and I'll rethink later.
A part of #23658
Currently InternalPercentilesBucket#percentile() relies on the percent array passed in
to be in sorted order. This changes the aggregation to store an internal lookup table that
is constructed from the percent/percentiles arrays passed in that can be used to look up
the percentile values.
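A rough sketch of the lookup-table idea (names simplified, not the actual class):

```java
import java.util.HashMap;
import java.util.Map;

// Map each requested percent to its computed value so that percentile(p) no longer
// depends on the percents array being sorted.
class PercentileLookup {
    private final Map<Double, Double> table = new HashMap<>();

    PercentileLookup(double[] percents, double[] percentiles) {
        for (int i = 0; i < percents.length; i++) {
            table.put(percents[i], percentiles[i]);
        }
    }

    double percentile(double percent) {
        Double value = table.get(percent);
        if (value == null) {
            throw new IllegalArgumentException("percent [" + percent + "] was not computed");
        }
        return value;
    }
}
```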
Closes #24331
In case of a bridge partition, shard allocation can fail "index.allocation.max_retries" times if the master is the super-connected node and recovery
source and target are on opposite sides of the bridge. This commit adds a reroute with retry_failed after healing the network partition so that the
ensureGreen check succeeds.
StreamInput has methods such as readVInt that perform sanity checks on the data using assertions,
which will catch bad data in tests but provide no safety when running as a node without assertions
enabled. The use of assertions also makes testing with invalid data difficult since we would need
to handle assertion errors in the code using the stream input and errors like this should not be
something we try to catch. This commit introduces a flag that will throw an IOException instead of
using an assertion.
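A conceptual sketch of the flag (not the real StreamInput code; the field name is made up):

```java
import java.io.DataInput;
import java.io.IOException;

// With the flag enabled, corrupt variable-length ints are reported as IOExceptions
// instead of relying on an assertion that only fires when the JVM runs with -ea.
class VIntReaderSketch {
    private final boolean throwOnBrokenVInt;   // hypothetical name for the new flag

    VIntReaderSketch(boolean throwOnBrokenVInt) {
        this.throwOnBrokenVInt = throwOnBrokenVInt;
    }

    int readVInt(DataInput in) throws IOException {
        int value = 0;
        for (int shift = 0; shift <= 28; shift += 7) {
            byte b = in.readByte();
            value |= (b & 0x7F) << shift;
            if ((b & 0x80) == 0) {
                return value;
            }
        }
        // more than 5 bytes: the stream is corrupt
        if (throwOnBrokenVInt) {
            throw new IOException("invalid vInt: more than 5 bytes");
        }
        assert false : "invalid vInt: more than 5 bytes";
        return value;
    }
}
```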
The percolator doesn't close the IndexReader of the memory index any more.
Prior to 2.x the percolator had its own SearchContext (PercolatorContext) that did this,
but that was removed when the percolator was refactored as part of the 5.0 release.
I think an alternative way to fix this is to let the percolator not use the bitset and fielddata caches;
that way we prevent the memory leak.
Closes #24108
This commit replaces two alternating regular expressions (that is,
regular expressions that consist of the form a|b where a and b are
characters) with the equivalent regular expression rewritten as a
character class (that is, [ab]). The reason this is an improvement is
because a|b involves backtracking while [ab] does not.
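For example (the concrete expressions in the commit may differ):

```java
import java.util.regex.Pattern;

// Both patterns accept the same characters, but the character class form does not
// force the regex engine to backtrack between alternatives.
class AlternationVsCharacterClass {
    static final Pattern ALTERNATION = Pattern.compile("\\r|\\n");
    static final Pattern CHARACTER_CLASS = Pattern.compile("[\\r\\n]");
}
```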
Relates #24316
This commit adds a compileTemplate method to the ScriptService.
Eventually this will be used to easily cutover all consumers to a new
TemplateService.
Relates #16314
The `count` value in the stats aggregation represents a simple doc count
that doesn't require a formatted version. We didn't render an "as_string"
version for count in the rest response, so the method should also be
removed in favour of just using String.valueOf(getCount()) if a string
version of the count is needed.
Closes #24287
There was a bug in the calculation of the shards that a snapshot must
wait on, due to their relocating or initializing, before the snapshot
can proceed safely to snapshot the shard data. In this bug, an
incorrect key was used to look up the index of the waiting shards,
with the result that each index would have at most one shard in
the waiting state causing the snapshot to pause. This could be
problematic if more than one shard is in the relocating or
initializing state, which would result in a snapshot prematurely
starting because it thinks it is only waiting on one relocating or
initializing shard (when in fact there could be more than one). While
not a common case and likely rare in practice, it is still problematic.
This commit fixes the issue by ensuring the correct key is used to look
up the waiting indices map as it is being built up, so the list of
waiting shards for each index (those shards that are relocating or
initializing) are aggregated for a given index instead of overwritten.
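An abstract illustration of the fix (types and names simplified, not the real code): entries for an index have to be looked up under the same key they were stored with, so that waiting shards accumulate instead of replacing each other.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class WaitingIndicesSketch {
    // Aggregate all waiting shards of an index under one key; a wrong or inconsistent
    // key would leave at most one waiting shard per index.
    static Map<String, List<Integer>> buildWaitingIndices(List<String> indexNames, List<Integer> shardIds) {
        Map<String, List<Integer>> waitingIndices = new HashMap<>();
        for (int i = 0; i < shardIds.size(); i++) {
            waitingIndices.computeIfAbsent(indexNames.get(i), k -> new ArrayList<>()).add(shardIds.get(i));
        }
        return waitingIndices;
    }
}
```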
If the user explicitly configured path.data to include
default.path.data, then we should not fail the node if we find indices
in default.path.data. This commit addresses this.
Relates #24285
This commit fixes the hash code for AliasFilter as the previous
implementation was neglecting to take into consideration the fact that
the aliases field is an array and thus a deep hash code of it should be
computed rather than a shallow hash code on the reference.
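A minimal sketch of the difference (field names assumed):

```java
import java.util.Arrays;
import java.util.Objects;

// Objects.hash(aliases, filter) would hash the array reference, so two equal filters
// could produce different hash codes; Arrays.hashCode(aliases) hashes the contents.
class AliasFilterHashSketch {
    private final String[] aliases;
    private final Object filter;

    AliasFilterHashSketch(String[] aliases, Object filter) {
        this.aliases = aliases;
        this.filter = filter;
    }

    @Override
    public int hashCode() {
        return Objects.hash(Arrays.hashCode(aliases), filter);
    }
}
```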
Relates #24286
The tribe node was being shut down by the test while a publishing round (the one that adds the tribe node to a cluster) had not completed yet: the tribe node itself
knows that it became part of the cluster and the test shuts it down, but another node has not applied the cluster state yet. That node then hangs while trying to
connect to the node that is shutting down (due to connect_timeout being 30 seconds), delaying publishing for 30 seconds and subsequently tripping an assertion
when another tribe instance wants to join.
Relates to #23695
When parsing StoredSearchScript we were adding a content type option that was forbidden (by a check that threw an exception) by the parser that is used to parse the template when we read it from the cluster state. This was stopping Elasticsearch from starting after stored search templates had been added.
This change no longer adds the content type option to the StoredScriptSource object when parsing the put search template request. This is safe because the StoredScriptSource content is always JSON when it is stored in the cluster state, since we do a conversion to JSON before this point.
It also removes the check for the content type in the options when parsing StoredScriptSource, so users who already have stored scripts can start Elasticsearch.
Closes #24227
ScriptService has two executable methods, one which takes a
CompiledScript, which is similar to search, and one that takes a raw
Script and both compiles and returns an ExecutableScript for it. The
latter is not needed, and the call sites which used one or the other
were mixed. This commit removes the extra executable method in favor of
callers first calling compile, then executable.
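A sketch of the intended call pattern after this change (signatures approximate, surrounding variables assumed):

```java
// Callers compile first, then build the executable from the compiled script.
Object runScript(ScriptService scriptService, Script script, Map<String, Object> params) {
    CompiledScript compiled = scriptService.compile(script, ScriptContext.Standard.AGGS);
    ExecutableScript executable = scriptService.executable(compiled, params);
    return executable.run();
}
```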
The unwrap method was leftover from support javascript and python. Since
those languages are removed in 6.0, this commit removes the unwrap
feature from scripts.
Before #22488 when an index couldn't be created during a `_bulk`
operation we'd do all the *other* actions and return the index
creation error on each failing action. In #22488 we accidentally
changed it so that we now reject the entire bulk request if a single
action cannot create an index that it must create to run. This
reverts to the old behavior while still keeping the nicer
error messages. Instead of failing the entire request we now only
fail the portions of the request that can't work because the index
doesn't exist.
Closes #24028
Today when removing a plugin, we attempt to move the plugin directory to
a temporary directory and then delete that directory from the
filesystem. We do this to avoid a plugin being in a half-removed
state. We previously tried an atomic move, and fell back to a non-atomic
move if that failed. Atomic moves can fail on union filesystems when the
plugin directory is not in the top layer of the
filesystem. Interestingly, the regular move can fail as well. This is
because when the JDK is executing such a move, it first tries to rename
the source directory to the target directory and if this fails with
EXDEV (as in the case of an atomic move failing), it falls back to
copying the source to the target, and then attempts to rmdir the
source. The bug here is that the JDK never deleted the contents of the
source so the rmdir will always fail (except in the case of an empty
directory).
Given all this silliness, we were inspired to find a different
strategy. The strategy is simple. We will add a marker file to the
plugin directory that indicates the plugin is in a state of
removal. This file will be the last file out the door during removal. If
this file exists during startup, we fail startup.
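A minimal sketch of the marker-file strategy (file name and layout hypothetical):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

class RemoveMarkerSketch {
    static void removePlugin(Path pluginDir) throws IOException {
        Path marker = pluginDir.resolve(".removing");           // marker goes in first
        Files.createFile(marker);
        try (Stream<Path> paths = Files.walk(pluginDir)) {
            paths.sorted(Comparator.reverseOrder())             // children before parents
                 .filter(p -> p.equals(marker) == false && p.equals(pluginDir) == false)
                 .forEach(p -> {
                     try {
                         Files.delete(p);
                     } catch (IOException e) {
                         throw new UncheckedIOException(e);
                     }
                 });
        }
        Files.delete(marker);                                   // marker is the last file out
        Files.delete(pluginDir);
    }

    // At startup: a surviving marker means the plugin is only half removed.
    static void checkForFailedRemoval(Path pluginDir) {
        if (Files.exists(pluginDir.resolve(".removing"))) {
            throw new IllegalStateException("found a half-removed plugin at [" + pluginDir + "]");
        }
    }
}
```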
Relates #24252
Today we might promote a primary and recover from store where after translog
recovery the local checkpoint is still behind the maximum sequence ID seen.
To fill the holes in the sequence ID history this PR adds a utility method
that fills up all missing sequence IDs up to the maximum seen sequence ID
with no-ops.
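A conceptual sketch of the utility (names hypothetical, not the actual engine API):

```java
import java.util.HashSet;
import java.util.Set;

// Fill every missing sequence number between the local checkpoint and the maximum
// sequence number seen with a no-op, so the history contains no holes.
class SeqNoGapFillerSketch {
    private final Set<Long> processed = new HashSet<>();

    void markProcessed(long seqNo) {
        processed.add(seqNo);
    }

    int fillSeqNoGaps(long localCheckpoint, long maxSeqNo) {
        int noOpsAdded = 0;
        for (long seqNo = localCheckpoint + 1; seqNo <= maxSeqNo; seqNo++) {
            if (processed.contains(seqNo) == false) {
                markProcessed(seqNo);   // stands in for writing a real no-op operation
                noOpsAdded++;
            }
        }
        return noOpsAdded;
    }
}
```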
Relates to #10708
The plugin cli currently resides inside the elasticsearch jar. This
commit moves it into a plugin-cli jar. This change alone is a no-op;
it does not change anything about what is loaded at runtime. But it will
allow easier testing (with fixtures in the future to test ES or maven
installation), as well as eventually not loading these classes when
starting elasticsearch.
This commit adds a XContentParserUtils.parseTypedKeysObject() method
that can be used to parse named XContent objects identified by a field
name containing a type identifier, a delimiter and the name of the
object to parse.
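For example, with the `typed_keys` response format a field name such as `sterms#my_terms` carries both pieces of information; a rough sketch of the split:

```java
// "sterms#my_terms": the part before the delimiter selects the parser to use,
// the part after it is the name of the object being parsed.
String field = "sterms#my_terms";
int delimiter = field.indexOf('#');
String type = field.substring(0, delimiter);   // "sterms"
String name = field.substring(delimiter + 1);  // "my_terms"
```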
The MultiBucketsAggregation.Bucket interface extends Writeable, forcing
all implementation classes to implement writeTo(). This commit removes
the Writeable from the interface and moves it down to the InternalBucket
implementation.
Changes in #24102 exposed the following oddity: PrioritizedEsThreadPoolExecutor.getPending() can return Pending entries where pending.task == null. This can happen, for example, when tasks are added to the pending list while they are in the clean-up phase, i.e. TieBreakingPrioritizedRunnable#runAndClean has already run, but afterExecute has not removed the task yet. Instead of safeguarding consumers of the API (as was done before #24102), this changes the executor to not count these tasks as pending at all.
Currently we don't test for count = 0, which will make a difference when adding
tests for parsing for the high level rest client. Min/max/sum should also
be tested with negative values and over a larger range.
The addition of the normalization feature on keywords slowed down the parsing
of large `terms` queries since all terms now have to go through normalization.
However this can be avoided in the default case that the analyzer is a
`keyword` analyzer since all that normalization will do is a UTF8 conversion.
Using `Analyzer.normalize` for that is a bit overkill and could be skipped.
Add option "enable_position_increments" with default value true.
If option is set to false, indexed value is the number of tokens
(not position increments count)
Currently any `query_string` query that uses a wildcard field pattern matching no existing field is rewritten to the `_all` field.
For instance:
````
#creating test doc
PUT testing/t/1
{
  "test": {
    "field_one": "hello",
    "field_two": "world"
  }
}
#searching abc.* (does not exist) -> hit
GET testing/t/_search
{
  "query": {
    "query_string": {
      "fields": [
        "abc.*"
      ],
      "query": "hello"
    }
  }
}
````
This bug first appeared in 5.0 after the query refactoring and only impacts users that use `_all` as the default field.
Indices created in 6.x will not have this problem since `_all` is deactivated in this version.
This change fixes the bug by returning a MatchNoDocsQuery for any term that expands to an empty list of fields.
Some of the base methods that don't have to do with the reduce phase and serialization can be moved to the base class, which is no longer an interface. This will be reusable by the high level REST client further down the road. It also simplifies things, as having an interface with a single implementor is not that helpful.