TransportService and RemoteClusterService are already closely coupled today,
and to simplify remote cluster integration down the road, RemoteClusterService
can become a direct dependency of TransportService. This change moves
RemoteClusterService into TransportService with the goal of making it a hidden
implementation detail of TransportService in follow-up changes.
RemoteClusterService is an internal service that should not necessarily be exposed
to plugins or other parts of the system. Yet, for cluster name parsing, for instance,
it is crucial to reuse some of the code used by RemoteClusterService. This
change extracts a base class that allows the settings-related code to be shared
and lets cluster settings updates to `search.remote.*` be observed by other services.
* Fixes #24259
Corrects the ScriptedMetricAggregator so that the script can have
access to scores during the map stage.
* Restored original tests. Added separate test.
As requested, I've restored the non-score-dependent tests, and added the
score-dependent metric as a separate test.
In #22267, we introduced the notion of incompatible snapshots in a
repository, and they were stored in a root-level blob named
`incompatible-snapshots`. If there were no incompatible snapshots in
the repository, then there was no `incompatible-snapshots` blob.
However, this causes a problem for some cloud-based repositories,
because if the blob does not exist, the cloud-based repositories may
attempt to keep retrying the read of a non-existent blob with
exponential backoff until giving up. This causes performance issues
(and potential timeouts) on snapshot operations because getting the
`incompatible-snapshots` is part of getting the repository data (see
RepositoryData#getRepositoryData()).
This commit fixes the issue by creating an empty
`incompatible-snapshots` blob in the repository if one does not exist.
This method has to do with how the transport action may or may not resolve wildcard expressions to alias names. It is only needed in TransportIndicesAliasesAction, and for this reason it should be a private method there rather than part of a request class that is also part of the Java API and, later, of the high level REST client.
This commit cleans up some cases where a list or map was being
constructed, and then an existing collection was copied into the new
collection. The cleanup is to instead use an appropriate constructor to
directly copy the existing collection in during collection
construction. The advantage of this is that the new collection is sized
appropriately.
Relates #24409
With #24236, the master now uses the pending queue when publishing to itself. This means that a cluster state update is put into the pending queue,
then sent to the ClusterApplierService to be applied. After it has been applied, it is marked as processed and removed from the pending queue.
ensureGreen is implemented as a cluster health action that waits on certain conditions, which will lead to a cluster state update task being submitted
on the master. When this task gets to run and the conditions are not satisfied yet, it registers a cluster state observer. This observer is registered
on the ClusterApplierService and waits on cluster state change events. ClusterApplierService first notifies the observer and then the discovery
layer. This means that there is a small time frame where ensureGreen can complete and call the node stats only to find the pending queue still containing
the last cluster state update.
Closes #24388
Failure detection should only be updated in ZenDiscovery after the current state has been updated to prevent a race condition
with handleLeaveRequest and handleNodeFailure as those check the current state to determine whether the failure is to be handled by this node.
The open/close index APIs, when executed with an index expression that matched no indices, threw an error even when allow_no_indices was set to true. The APIs should rather honour the option and behave as a no-op in that case.
Closes #24031
Currently we don't write the count value to the geo_centroid aggregation REST response,
but it is provided via the Java API and the count() method in the GeoCentroid interface.
We should add this parameter to the REST output and also provide it via the getProperty()
method.
Eclipse doesn't allow extra semicolons after an import statement:
```
import foo.Bar;; // <-- syntax error!
```
Here is the Eclipse bug:
https://bugs.eclipse.org/bugs/show_bug.cgi?id=425140
which the Eclipse folks closed as "the spec doesn't allow these
semicolons so why should we?" Which is fair. Here is the bug
against javac for allowing them:
https://bugs.openjdk.java.net/browse/JDK-8027682
which hasn't been touched since 2013 without explanation. There
is, however, a rather educational mailing list thread:
http://mail.openjdk.java.net/pipermail/compiler-dev/2013-August/006956.html
which contains gems like, "In general, it is better/simpler to
change javac to conform to the spec. (Except when it is not.)"
I suspect the reason this hasn't been fixed is:
```
FWIW, if we change javac such that the set of programs accepted by javac
is changed, we have an process (currently Oracle internal) to get
approval for such a change. So, we would not simply change javac on a
whim to meet the spec; we would at least have other eyes looking at the
behavioral change to determine if it is "acceptable".
```
from http://mail.openjdk.java.net/pipermail/compiler-dev/2013-August/006973.html
The alias parameter was documented as a list in our rest-spec, yet only the first value out of the list was getting read and processed. This commit adds support for multiple aliases to _cat/aliases.
Closes #23661
The version on an update request is syntactic sugar
for a get of a specific version, a doc merge, and a versioned
index. This changes it to reject requests with both an
upsert and a version.
If the upsert index request is versioned, we also
reject the op.
The previous commit (35f78d098a) introduced an assertion in ZenDiscovery that was overly restrictive - it could trip when a cluster state that was
successfully published would not be applied locally because a master with a better cluster state came along in the meantime.
Separates cluster state publishing from applying cluster states:
- ClusterService is split into two classes MasterService and ClusterApplierService. MasterService has the responsibility to calculate cluster state updates for actions that want to change the cluster state (create index, update shard routing table, etc.). ClusterApplierService has the responsibility to apply cluster states that have been successfully published and invokes the cluster state appliers and listeners.
- ClusterApplierService keeps track of the last applied state, but MasterService is stateless and uses the last cluster state that is provided by the discovery module to calculate the next prospective state. The ClusterService class is still kept around, which now just delegates actions to ClusterApplierService and MasterService.
- The discovery implementation is now responsible for managing the last cluster state that is used by the consensus layer and the master service. It also exposes the initial cluster state which is used by the ClusterApplierService. The discovery implementation is also responsible for adding the right cluster-level blocks to the initial state.
- NoneDiscovery has been renamed to TribeDiscovery as it is exclusively used by TribeService. It adds the tribe blocks to the initial state.
- ZenDiscovery is synchronized on state changes to the last cluster state that is used by the consensus layer and the master service, and does not submit cluster state update tasks anymore to make changes to the disco state (except when becoming master).
Control flow for cluster state updates is now as follows:
- State updates are sent to MasterService
- MasterService gets the latest committed cluster state from the discovery implementation and calculates the next cluster state to publish
- MasterService submits the new prospective cluster state to the discovery implementation for publishing
- Discovery implementation publishes cluster states to all nodes and, once the state is committed, asks the ClusterApplierService to apply the newly committed state.
- ClusterApplierService applies state to local node.
- Getters for DateHisto `interval` and `offset` should return a
long, not double
- Add getter for the filter in a FilterAgg
- Add getters for subaggs / pipelines in base AggregationBuilder
Allow the `Context` to be used in the builder function used within ConstructingObjectParser.
This facilitates scenarios where a constructor argument comes from a URL parameter or from the document id.
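A minimal sketch of how the context could then feed a constructor argument, assuming the new builder overload that receives the parse context alongside the parsed arguments (the `Item` class, field name and context type are hypothetical):
```
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.xcontent.ConstructingObjectParser;

public class ItemParser {
    // hypothetical target class: the id comes from the URL/document id, the name from the request body
    public static class Item {
        final String id;
        final String name;
        Item(String id, String name) { this.id = id; this.name = name; }
    }

    // the Context type parameter (here a String carrying the id) is handed to the builder function
    public static final ConstructingObjectParser<Item, String> PARSER = new ConstructingObjectParser<>(
            "item", (args, id) -> new Item(id, (String) args[0]));

    static {
        PARSER.declareString(ConstructingObjectParser.constructorArg(), new ParseField("name"));
    }
}
```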
The tribe service can take a while to initialize, depending on how many clusters it needs to connect to. This change moves writing the ports file used by tests to before the tribe service is started.
This adds the `index.mapping.single_type` setting, which enforces that indices
have at most one type when it is true. The default value is true for 6.0+ indices
and false for old indices.
Relates #15613
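As a small illustration (a sketch, not taken from this change), the setting can be pinned explicitly when building the settings for a new index:
```
import org.elasticsearch.common.settings.Settings;

public class SingleTypeSettingExample {
    // settings for a new index that explicitly enforces a single mapping type
    public static final Settings SINGLE_TYPE = Settings.builder()
            .put("index.mapping.single_type", true)
            .build();
}
```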
The one-argument ctor for `Script` creates a script with the
default language, but most usages of it are for testing and either
don't care about the language or are for use with
`MockScriptEngine`. This replaces most usages of the one-argument
ctor on `Script` with calls to `ESTestCase#mockScript` to make
it clear that the tests don't need the default scripting language.
I've also factored out some copy-and-pasted script generation
code into a single place. I would have had to change that code
to use `mockScript` anyway, so it was easier to perform the
refactor.
Relates to #16314
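A hedged before/after sketch of the intent (the script body is a placeholder; it assumes the 5.x `Script` and `ESTestCase` APIs described above):
```
import org.elasticsearch.script.Script;
import org.elasticsearch.test.ESTestCase;

public class MockScriptExampleTests extends ESTestCase {
    public void testPrefersMockScript() {
        // before: the one-argument ctor ties the test to the default scripting language
        Script before = new Script("state.transform()");
        // after: explicitly language-agnostic, backed by MockScriptEngine in tests
        Script after = mockScript("state.transform()");
        assertEquals(before.getIdOrCode(), after.getIdOrCode());
    }
}
```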
In case of a Cross Cluster Search, the coordinating node should split the original indices per cluster, and send over to each cluster only its own set of original indices, rather than the set taken from the original search request which contains all the indices.
In fact, each remote cluster should not be aware of the indices belonging to other remote clusters.
This test can run into a split-brain situation as minimum_master_nodes is not properly set. To prevent this, make sure that at least one of the two
master nodes that are initially started has minimum_master_nodes correctly set.
When an index is shrunk using the shrink APIs, the shrink operation adds
some internal index settings to the shrink index, for example
`index.shrink.source.name|uuid` to denote the source index, as well as
`index.routing.allocation.initial_recovery._id` to denote the node on
which all shards for the source index resided when the shrunken index
was created. However, this presents a problem when taking a snapshot of
the shrunken index and restoring it to a cluster where the initial
recovery node is not present, or restoring to the same cluster where the
initial recovery node is offline or decommissioned. The restore
operation fails to allocate the shard in the shrunken index to a node
when the initial recovery node is not present, and a restore type of
recovery will *not* go through the PrimaryShardAllocator, meaning that
it will not have the chance to force allocate the primary to a node in
the cluster. Rather, restore initiated shard allocation goes through
the BalancedShardAllocator which does not attempt to force allocate a
primary.
This commit fixes the aforementioned problem by not requiring allocation
to occur on the initial recovery node when the recovery type is a
restore of a snapshot. This commit also ensures that the internal
shrink index settings are recognized and not archived (which can trip an
assertion in the restore scenario).
Closes #24257
This removes a few lines of dead code from ScriptedMetricAggregationBuilder.
It was just completely dead code: it added things to a Set that was then not used in any way.
Another step down the road to dropping the
lucene-analyzers-common dependency from core.
Note that this removes some tests that no longer compile from
core. I played around with adding them to the analysis-common
module where they would compile but we already test these in
the tests generated from the example usage in the documentation.
I'm not super happy with the way that `requiresAnalysisSettings`
works with regard to plugins. I think it'd be fairly bug-prone
for plugin authors to use. But I'm making it visible as is for
now and I'll rethink later.
A part of #23658
Currently InternalPercentilesBucket#percentile() relies on the percent array passed in
to be in sorted order. This changes the aggregation to store an internal lookup table that
is constructed from the percent/percentiles arrays passed in that can be used to look up
the percentile values.
Closes #24331
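A hedged sketch of the lookup-table idea (names are illustrative, not the actual implementation):
```
import java.util.HashMap;
import java.util.Map;

public class PercentileLookupSketch {
    private final Map<Double, Double> percentileLookups = new HashMap<>();

    // build the table once from the parallel percents/percentiles arrays,
    // so lookups no longer depend on the percents being sorted
    public PercentileLookupSketch(double[] percents, double[] percentiles) {
        for (int i = 0; i < percents.length; i++) {
            percentileLookups.put(percents[i], percentiles[i]);
        }
    }

    public double percentile(double percent) {
        Double value = percentileLookups.get(percent);
        if (value == null) {
            throw new IllegalArgumentException("percent [" + percent + "] was not computed");
        }
        return value;
    }
}
```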
In case of a bridge partition, shard allocation can fail "index.allocation.max_retries" times if the master is the super-connected node and recovery
source and target are on opposite sides of the bridge. This commit adds a reroute with retry_failed after healing the network partition so that the
ensureGreen check succeeds.
StreamInput has methods such as readVInt that perform sanity checks on the data using assertions,
which will catch bad data in tests but provide no safety when running as a node without assertions
enabled. The use of assertions also makes testing with invalid data difficult since we would need
to handle assertion errors in the code using the stream input and errors like this should not be
something we try to catch. This commit introduces a flag that will throw an IOException instead of
using an assertion.
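A hedged sketch of the pattern (names and the exact check are illustrative; the real StreamInput code differs):
```
import java.io.IOException;

public class VLongCheckSketch {
    // when true, malformed data throws; when false, only an assertion catches it (the previous behaviour)
    private final boolean throwOnBrokenData;

    public VLongCheckSketch(boolean throwOnBrokenData) {
        this.throwOnBrokenData = throwOnBrokenData;
    }

    // a 64-bit vLong needs at most ten 7-bit groups; more than that means the stream is corrupted
    void ensureSaneByteCount(int bytesRead) throws IOException {
        if (bytesRead > 10) {
            if (throwOnBrokenData) {
                throw new IOException("Invalid vLong: read more than 10 bytes");
            }
            assert false : "Invalid vLong";
        }
    }
}
```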
The percolator doesn't close the IndexReader of the memory index any more.
Prior to 2.x the percolator had its own SearchContext (PercolatorContext) that did this,
but that was removed when the percolator was refactored as part of the 5.0 release.
I think an alternative way to fix this is to let the percolator not use the bitset and fielddata caches;
that way we prevent the memory leak.
Closes #24108
This commit replaces two alternating regular expressions (that is,
regular expressions that consist of the form a|b where a and b are
characters) with the equivalent regular expression rewritten as a
character class (that is, [ab]). The reason this is an improvement is
that a|b involves backtracking while [ab] does not.
Relates #24316
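For example (illustrative patterns only, not the exact expressions changed in this commit):
```
import java.util.regex.Pattern;

public class CharClassExample {
    // alternation of two single characters: the regex engine may backtrack between the branches
    static final Pattern ALTERNATION = Pattern.compile("\\r|\\n");

    // equivalent character class: matched in a single step, no backtracking
    static final Pattern CHARACTER_CLASS = Pattern.compile("[\\r\\n]");
}
```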
This commit adds a compileTemplate method to the ScriptService.
Eventually this will be used to easily cut over all consumers to a new
TemplateService.
relates #16314
The `count` value in the stats aggregation represents a simple doc count
that doesn't require a formatted version. We didn't render an "as_string"
version for count in the rest response, so the method should also be
removed in favour of just using String.valueOf(getCount()) if a string
version of the count is needed.
Closes #24287
There was a bug in the calculation of the shards that a snapshot must
wait on, due to their relocating or initializing, before the snapshot
can proceed safely to snapshot the shard data. In this bug, an
incorrect key was used to look up the index of the waiting shards,
resulting in the fact that each index would have at most one shard in
the waiting state causing the snapshot to pause. This could be
problematic if there are more than one shard in the relocating or
initializing state, which would result in a snapshot prematurely
starting because it thinks it's only waiting on one relocating or
initializing shard (when in fact there could be more than one). While
not a common case and likely rare in practice, it is still problematic.
This commit fixes the issue by ensuring the correct key is used to look
up the waiting indices map as it is being built up, so the list of
waiting shards for each index (those shards that are relocating or
initializing) is aggregated for a given index instead of overwritten.
If the user explicitly configured path.data to include
default.path.data, then we should not fail the node if we find indices
in default.path.data. This commit addresses this.
Relates #24285
This commit fixes the hash code for AliasFilter as the previous
implementation was neglecting to take into consideration the fact that
the aliases field is an array and thus a deep hash code of it should be
computed rather than a shallow hash code on the reference.
Relates #24286
The tribe node was being shut down by the test while a publishing round (that adds the tribe node to a cluster) was not completed yet: the node itself
knows that it became part of the cluster, and the test shuts the tribe node down, but another node has not applied the cluster state yet, which makes
that node hang while trying to connect to the node that is shutting down (due to connect_timeout being 30 seconds), delaying publishing for 30
seconds, and subsequently tripping an assertion when another tribe instance wants to join.
Relates to #23695
When parsing StoredSearchScript we were adding a content type option that was forbidden (by a check that threw an exception) by the parser that's used to parse the template when we read it from the cluster state. This was stopping Elasticsearch from starting after stored search templates had been added.
This change no longer adds the content type option to the StoredScriptSource object when parsing from the put search template request. This is safe because the StoredScriptSource content is always JSON when it's stored in the cluster state, since we do a conversion to JSON before this point.
Also removes the check for the content type in the options when parsing StoredScriptSource so users who already have stored scripts can start Elasticsearch.
Closes #24227
ScriptService has two executable methods, one which takes a
CompiledScript, which is similar to search, and one that takes a raw
Script and both compiles and returns an ExecutableScript for it. The
latter is not needed, and the call sites which used one or the other
were mixed. This commit removes the extra executable method in favor of
callers first calling compile, then executable.
The unwrap method was left over from supporting JavaScript and Python. Since
those languages are removed in 6.0, this commit removes the unwrap
feature from scripts.
Before #22488 when an index couldn't be created during a `_bulk`
operation we'd do all the *other* actions and return the index
creation error on each failing action. In #22488 we accidentally
changed it so that we now reject the entire bulk request if a single
action cannot create an index that it must create to run. This
reverts to the old behavior while still keeping the nicer
error messages. Instead of failing the entire request we now only
fail the portions of the request that can't work because the index
doesn't exist.
Closes #24028
Today when removing a plugin, we attempt to move the plugin directory to
a temporary directory and then delete that directory from the
filesystem. We do this to avoid a plugin being in a half-removed
state. We previously tried an atomic move, and fell back to a non-atomic
move if that failed. Atomic moves can fail on union filesystems when the
plugin directory is not in the top layer of the
filesystem. Interestingly, the regular move can fail as well. This is
because when the JDK is executing such a move, it first tries to rename
the source directory to the target directory and if this fails with
EXDEV (as in the case of an atomic move failing), it falls back to
copying the source to the target, and then attempts to rmdir the
source. The bug here is that the JDK never deleted the contents of the
source so the rmdir will always fail (except in the case of an empty
directory).
Given all this silliness, we were inspired to find a different
strategy. The strategy is simple. We will add a marker file to the
plugin directory that indicates the plugin is in a state of
removal. This file will be the last file out the door during removal. If
this file exists during startup, we fail startup.
Relates #24252
Today we might promote a primary and recover from store where after translog
recovery the local checkpoint is still behind the maximum sequence ID seen.
To fill the holes in the sequence ID history this PR adds a utility method
that fills up all missing sequence IDs up to the maximum seen sequence ID
with no-ops.
Relates to #10708
The plugin cli currently resides inside the elasticsearch jar. This
commit moves it into a plugin-cli jar. This change alone is a no-op;
it does not change anything about what is loaded at runtime. But it will
allow easier testing (with fixtures in the future to test ES or maven
installation), as well as eventually not loading these classes when
starting elasticsearch.
This commit adds a XContentParserUtils.parseTypedKeysObject() method
that can be used to parse named XContent objects identified by a field
name containing a type identifier, a delimiter and the name of the
object to parse.
The MultiBucketsAggregation.Bucket interface extends Writeable, forcing
all implementation classes to implement writeTo(). This commit removes
Writeable from the interface and moves it down to the InternalBucket
implementation.
Changes in #24102 exposed the following oddity: PrioritizedEsThreadPoolExecutor.getPending() can return Pending entries where pending.task == null. This can happen for example when tasks are added to the pending list while they are in the clean up phase, i.e. TieBreakingPrioritizedRunnable#runAndClean has run already, but afterExecute has not removed the task yet. Instead of safeguarding consumers of the API (as was done before #24102) this changes the executor to not count these tasks as pending at all.
Currently we don't test for count = 0, which will make a difference when adding
tests for parsing for the high level REST client. Also, min/max/sum should
be tested with negative values and on a larger range.
The addition of the normalization feature on keywords slowed down the parsing
of large `terms` queries since all terms now have to go through normalization.
However this can be avoided in the default case that the analyzer is a
`keyword` analyzer since all that normalization will do is a UTF8 conversion.
Using `Analyzer.normalize` for that is a bit overkill and could be skipped.
Add option "enable_position_increments" with default value true.
If the option is set to false, the indexed value is the number of tokens
(not the position increments count).
Currently any `query_string` query that uses a wildcard field with no matching field is rewritten with the `_all` field.
For instance:
````
#creating test doc
PUT testing/t/1
{
"test": {
"field_one": "hello",
"field_two": "world"
}
}
#searching abc.* (does not exist) -> hit
GET testing/t/_search
{
"query": {
"query_string": {
"fields": [
"abc.*"
],
"query": "hello"
}
}
}
````
This bug first appeared in 5.0 after the query refactoring and impacts only users that use `_all` as default field.
Indices created in 6.x will not have this problem since `_all` is deactivated in this version.
This change fixes this bug by returning a MatchNoDocsQuery for any term that expands to an empty list of fields.
Some of the base methods that don't have to do with the reduce phase and serialization can be moved to the base class, which is no longer an interface. This will be reusable by the high level REST client further down the road. It also simplifies things, as having an interface with a single implementor is not that helpful.
In InternalAggregationTestCase, we can check that the internal aggregation and the parsed aggregation always produce the same XContent, whether or not the original internal aggregation has been shuffled.
Start moving built in analysis components into the new analysis-common
module. The goal of this project is:
1. Remove core's dependency on lucene-analyzers-common.jar which should
shrink the dependencies for transport client and high level rest client.
2. Prove that analysis plugins can do all the "built in" things by moving all
"built in" behavior to a plugin.
3. Force tests not to depend on any oddball analyzer behavior. If tests
need anything more than the standard analyzer they can use the mock
analyzer provided by Lucene's test infrastructure.
This leniency was left in after plugin installer refactoring for 2.0
because some tests still relied on it. However, the need for this
leniency no longer exists.
Unlike other implementations of InternalNumericMetricsAggregation.SingleValue,
the InternalBucketMetricValue aggregation currently doesn't implement a
specialized interface that exposes the `keys()` method. This change adds this so
that clients can access the keys via the interface.
This change adds an index setting to define how the documents should be sorted inside each Segment.
It allows any numeric, date, boolean or keyword field inside a mapping to be used to sort the index on disk.
It is not allowed to use `nested` fields inside an index that defines index sorting, since `nested` fields rely on the original sort of the index.
This change does not add early termination capabilities in the search layer. This will be added in a follow up.
Relates #6720
* Replicate write failures
Currently, when a primary write operation fails after generating
a sequence number, the failure is not communicated to the replicas.
Ideally, every operation which generates a sequence number on primary
should be recorded in all replicas.
In this change, a sequence number is associated with write operation
failure. When a failure with an assigned sequence number arrives at a
replica, the failure cause and sequence number are recorded in the translog
and the sequence number is marked as completed via executing `Engine.noOp`
on the replica engine.
* use zlong to serialize seq_no
* Incorporate feedback
* track write failures in translog as a noop in primary
* Add tests for replicating write failures.
Test that document failure (w/ seq no generated) are recorded
as no-op in the translog for primary and replica shards
* Update to master
* update shouldExecuteOnReplica comment
* rename indexshard noop to markSeqNoAsNoOp
* remove redundant conditional
* Consolidate possible replica action for bulk item request
depending on its primary execution
* remove bulk shard result abstraction
* fix failure handling logic for bwc
* add more tests
* minor fix
* cleanup
* incorporate feedback
* incorporate feedback
* add assert to remove handling noop primary response when 5.0 nodes are not supported
The `maxUnsafeAutoIdTimestamp` timestamp is a safety marker guaranteeing that no retried indexing operation with a higher auto-generated id timestamp was processed by the engine. This allows us to safely process documents without checking if they were seen before.
Currently this property is maintained in memory and is handed off from the primary to any replica during the recovery process.
This commit takes a more natural approach and stores it in the lucene commit, using the same semantics (no retry op with a higher time stamp is part of this commit). This means that the knowledge is transferred during the file copy and also means that we don't need to worry about crazy situations where an original append only request arrives at the engine after a retry was processed *and* the engine was restarted.
Today when we merge hits we have a hard check to prevent AIOOB exceptions
that simply skips an expected search hit. This can only happen if there is a
bug in the code which should be turned into a hard exception or a triggered
assertion. This change adds an assertion and removes the lenient check for the
fetched hits.
Some aggregations (like Min, Max etc) use a wrong DocValueFormat in
tests (like IP or GeoHash). We should not test aggregations that expect
a numeric value with a DocValueFormat like IP. Such wrong DocValueFormat
can also prevent the aggregation from being rendered as ToXContent, and this
will be an issue for the High Level REST Client tests, which expect to be
able to parse back aggregations.
Now that the Percentile interface has been merged with the InternalPercentile
class in core (#24154), the AbstractParsedPercentiles should use it.
This commit also changes InternalPercentilesRanksTestCase so that it now
tests the iterator obtained from parsed percentiles ranks aggregations.
Adding this new test raised an issue in the iterators where key and
value are "swapped" in internal implementations when building the
iterators (see InternalTDigestPercentileRanks.Iter constructor that
accepts the `keys` as the first parameter named `values`, each key
being mapped to the `value` field of Percentile class). This is because
percentiles ranks aggs invert percentiles/values compared to the
percentiles aggs.
* Add assume in InternalAggregationTestCase
* Update after Luca review
We want to upgrade to Lucene 7 ahead of time in order to be able to check whether it causes any trouble to Elasticsearch before Lucene 7.0 gets released. From a user perspective, the main benefit of this upgrade is the enhanced support for sparse fields, whose resource consumption is now a function of the number of docs that have a value rather than the total number of docs in the index.
Some notes about the change:
- it includes the deprecation of the `disable_coord` parameter of the `bool` and `common_terms` queries: Lucene has removed support for coord factors
- it includes the deprecation of the `index.similarity.base` expert setting, since it was only useful to configure coords and query norms, which have both been removed
- two tests have been marked with `@AwaitsFix` because of #23966, which we intend to address after the merge
Checks that IndicesClusterStateService stays consistent with incoming cluster states that contain no_master blocks (especially
discovery.zen.no_master_block=all which disables state persistence). In particular this checks that active shards which have no in-memory data
structures on a node are failed.
This changes the trace level logging to warn, and adds the needed number to the message as well.
My fear is that it may get noisy, but this is an issue that you want to be noisy about.
When preparing the final settings in the environment, we unconditionally
set path.data even if path.data was not explicitly set. This confounds
detection for whether or not path.data was explicitly set, and this is
trappy. This commit adds logic to only set path.data in the final
settings if path.data was explicitly set, and provides a test case that
fails without this logic.
Relates #24132
Today when a flush is performed, the translog is committed and if there
are no outstanding views, only the current translog generation is
preserved. Yet for the purpose of sequence numbers, we need stronger
guarantees than this. This commit migrates the preservation of translog
generations to keep the minimum generation that would be needed to
recover after the local checkpoint.
Relates #24015
In Elasticsearch 5.3.0 a bug was introduced in the merging of default
settings when the target setting existed as an array. When this bug
concerns path.data and default.path.data, we ended up in a situation
where the paths specified in both settings would be used to write index
data. Since our packaging sets default.path.data, users that configure
multiple data paths via an array and use the packaging are subject to
having shards land in paths in default.path.data when that is very
likely not what they intended.
This commit is an attempt to rectify this situation. If path.data and
default.path.data are configured, we check for the presence of indices
there. If we find any, we log messages explaining the situation and fail
the node.
Relates #24099
The cat APIs and rest tables would obtain a stream from the RestChannel, which happened to be a
ReleasableBytesStreamOutput. These APIs used the stream to write content to, closed the stream,
and then tried to send a response. After #23941 was merged, closing the stream meant that the bytes
were released for use elsewhere. This caused occasional corruption of the response when the bytes
were used prior to the response being sent.
This commit changes these two usages to wrap the stream obtained from the channel in a flush on
close stream so that the bytes are still reserved until the message is sent.
Empty IDs are rejected during indexing, so we should not randomly
produce them during tests. This commit modifies the simple versioning
tests to no longer produce empty IDs.
When indexing a document via the bulk API where IDs can be explicitly
specified, we currently accept an empty ID. This is problematic because
such a document can not be obtained via the get API. Instead, we should
reject these requests as accepting them could be a dangerous form of
leniency. Additionally, we already have a way of specifying
auto-generated IDs and that is to not explicitly specify an ID so we do
not need a second way. This commit rejects the individual requests where
ID is specified but empty.
Relates #24118
Internal indexing requests in Elasticsearch may be processed out of order and repeatedly. This is important during recovery and due to concurrency in replicating requests between primary and replicas. As such, a replica/recovering shard needs to be able to identify that an incoming request contains information that is old and thus need not be processed. The current logic is based on the external version. This is sadly not sufficient. This PR moves the logic to rely on sequence numbers and primary terms, which give the semantics we need.
Relates to #10708
When building headers for a REST response, we de-duplicate the warning
headers based on the actual warning value. The current implementation of
this uses a capturing regular expression that is prone to excessive
backtracking. In cases where a request involves a large number of warnings,
this extraction can be a severe performance penalty. An example where
this can arise is a bulk indexing request that utilizes a deprecated
feature (e.g., using deprecated forms of boolean values). This commit is
an attempt to address this performance regression. We already know the
format of the warning header, so we do not need to use a regular
expression to parse it but rather can parse it by hand to extract the
warning value. This gains back the vast majority of the performance lost
due to the usage of a deprecated feature. There is still a performance
loss due to logging the deprecation message but we do not address that
concern in this commit.
Relates #24114
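A hedged sketch of the hand-rolled extraction (it assumes the warning value is the first quoted segment of the header and ignores escaped quotes, which the real implementation still has to consider):
```
public class WarningValueSketch {
    // e.g. header = 299 Elasticsearch-6.0.0-abc123 "this feature is deprecated" "Mon, 01 Jan 2018 00:00:00 GMT"
    static String extractWarningValue(String header) {
        final int start = header.indexOf('"');
        if (start < 0) {
            return header; // not in the expected format, fall back to the raw header
        }
        final int end = header.indexOf('"', start + 1);
        if (end < 0) {
            return header;
        }
        return header.substring(start + 1, end);
    }
}
```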
This commit makes closing a ReleasableBytesStreamOutput release the underlying BigArray so
that we can use try-with-resources with these streams and avoid leaking memory by not returning
the BigArray. As part of this change, the ReleasableBytesStreamOutput adds protection to only
release the BigArray once.
In order to make some of the changes cleaner, the ReleasableBytesStream interface has been
removed. The BytesStream interface is changed to an abstract class so that we can use it as a
usable return type for a new method, Streams#flushOnCloseStream. This new method wraps a
given stream and overrides the close method so that the stream is simply flushed and not closed.
This behavior is used in the TcpTransport when compression is used with a
ReleasableBytesStreamOutput as we need to close the compressed stream to ensure all of the data
is written from this stream. Closing the compressed stream will try to close the underlying stream
but we only want to flush so that all of the written bytes are available.
Additionally, an error message method added in the BytesRestResponse did not use a builder
provided by the channel and instead created its own JSON builder. This changes that method to use
the channel builder and in turn the bytes stream output that is managed by the channel.
Note, this commit differs from 6bfecdf921 in that it updates
ReleasableBytesStreamOutput to handle the case of the BigArray decreasing in size, which changes
the reference to the BigArray. When the reference is changed, the releasable needs to be updated
otherwise there could be a leak of bytes and corruption of data in unrelated streams.
This reverts commit afd45c1432, which reverted #23572.
There are test failures that suggest that the import of dangling indices is happening too early, before the dangling indices are ready to be consumed.
This commit adds an ensureGreen() at the end of cluster initialization to make sure that no cluster state updates are happening while the dangling
indices are prepared on-disk.
TaskInfo is stored as a part of TaskResult and therefore can be read by nodes with an older version. If we add any additional information to TaskInfo (for #23250, for example), nodes with an older version should be able to ignore it, otherwise they will not be able to read TaskResults stored by newer nodes.
This commit collapses the SyncBulkRequestHandler and
AsyncBulkRequestHandler into a single BulkRequestHandler. The new
handler executes a bulk request and awaits the completion if the
BulkProcessor was configured with a concurrentRequests setting of 0.
Otherwise the execution happens asynchronously.
As part of this change the Retry class has been refactored.
withSyncBackoff and withAsyncBackoff have been replaced with two
versions of withBackoff. One method takes a listener that will be
called on completion. The other method returns a future that will be
completed on request completion.
Today Elasticsearch allows default settings to be used only if the
actual setting is not set. These settings are trappy, and the complexity
invites bugs. This commit removes support for default settings with the
exception of default.path.data, default.path.conf, and default.path.logs
which are maintained to support packaging. A follow-up will remove
support for these as well.
Relates #24093
In Elasticsearch 5.3.0 a bug was introduced in the merging of default
settings when the target setting existed as an array. This arose due to
the fact that when a target setting is an array, the setting key is
broken into key.0, key.1, ..., key.n, one for each element of the
array. When settings are replaced by default.key, we are looking for the
target key but not the target key.0. This leads to key, and key.0, ...,
key.n being present in the constructed settings object. This commit
addresses two issues here. The first is that we fix the merging of the
keys so that when we try to merge default.key, we also check for the
presence of the flattened keys. The second is that when we try to get a
setting value as an array from a settings object, we check whether or
not the backing map contains the top-level key as well as the flattened
keys. This latter check would have caught the first bug. For kicks, we
add some tests.
Relates #24074
This new version of jna is rebuilt from the official release of jna, but
with native libs linked against older glibc in order to support all
platforms elasticsearch supports.
closes #23640
The "category" in the context suggester could be a String, Number or Boolean. However, with the changes in version 5 this is failing and only accepting String. This will be a problem for existing users of Elasticsearch if they choose to migrate to a higher version, as their existing mapping and queries will fail, as mentioned in bug #22358.
This PR fixes the above-mentioned issue and allows users to migrate seamlessly.
Closes #22358
This change makes the build info initialization only try to load a jar
manifest if it is the elasticsearch jar. Anything else (e.g. a repackaged
ES for use of the transport client in an uber jar) will contain "Unknown"
for the build info as it does for tests currently.
fixes #21955
It can easily happen that we touch a logger before logging is configured
due to chains of static initializers and other such scenarios. This
commit adds detection for this situation and will fail startup if we
touch a logger before logging is configured. Such a premature touch is a
bug that will now cause builds to fail.
Relates #24076
This is required in master now that #24071 is in or else we fail during BWC testing because the 5.x branch contains 5.5 but the build thinks it should contain 5.4.
Adding parsing of InternalCardinality xContent output. The parsing method will return a new
implementation of the Cardinality interface, ParsedCardinality.
Currently we can run into test errors by accidentally using e.g. a "simple"
analyzer on a numeric field, which might lead to number parsing errors. While
these errors are correct, we should avoid these combinations in our regular
tests.
The S3 repository has many levels of settings it looks at to create a
repository, and these settings were read at repository creation time.
This meant secure settings like access and secret keys had to be
available after node construction. This change makes setting loading eager for
everything except repository-level settings, so that secure settings
can be stashed, and the keystore can once again be closed after
bootstrapping the node is complete.
Today Elasticsearch and other CLI tools that rely on environment aware
command leniently accept duplicate settings with the last one
winning. This commit removes this leniency.
Relates #24053
This is related to #23893. This commit allows users to use wildcards for
cluster names when executing a cross cluster search.
So instead of defining every cluster such as:
GET one:*,two:*,three:*/_search
A user could just search:
GET *:*/_search
As ":" characters are currently allowed in index names, if the text
up to the first ":" does not match a defined cluster name, the entire
string is treated as an index name.
All our actions that are invoked from rest actions have corresponding
transport actions. This adds the transport action for RestRemoteClusterInfoAction
for consistency.
Relates to #23969
When a primary relocation completes while there are ongoing replica recoveries, the recoveries for these replicas need to be restarted (as a new primary is in charge of replicating changes). Before this commit, the need for a recovery restart was detected by the data nodes that had the replicas, by checking on each cluster state update if the recovery process had completed before the recovery source changed. That code had a race, however, which could lead to a not-fully recovered shard exposing itself as started (see #23904).
This commit takes a different approach: When the primary relocation completes and the master updates the cluster state to move the primary shard from relocating to started, it will reinitialize all initializing replica shards, by giving them a fresh allocation id. Data nodes that have the replica shard will simply detect that the allocation id changed and restart the recovery process (instead of trying to determine the need to restart based on ongoing recoveries).
Note: Removal of the code in IndicesClusterStateService that checks whether the recovery source has changed will not be backported to the 5.x branch. This ensures backward compatibility for the situation where the master node is older and does not have the code changes that have been introduced in this PR.
Closes #23904
The `AsyncBulkByScrollActionTests` were brittle because they used the
current time. That was a mistake. This removes the current time from
the test, instead adding it to the parameters passed in to the
appropriate methods. This means that we take the current time slightly
earlier in all cases, but that shouldn't make a difference.
Closes #24005
Example failure:
https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+nfs/161/consoleFull
Some systems like GCE rely on a plaintext file containing credentials.
Rather than extract the information out of that credentials file and
store each piece individually in the keystore, it is cleaner to just
store the entire file.
This commit adds support to the keystore wrapper for secure file
settings. These are settings that contain an entire file that would
normally be stored on the local filesystem. Retrieving the file returns
an input stream to the file contents. This also adds a `add-file`
command to the keystore cli.
In order to support both strings and files as values for settings, the
metadata format of the keystore has also been updated (with backcompat)
to keep a map of setting name to type.
We are still carrying some legacy code that deals with Lucene indices
that don't have checksums. Yet, we have not supported these indices
for a while now; in fact, since version 5.0 such an index is not supported
anymore. This commit removes all the special handling and leniency involved.
Now that we have incremental reduce functions for topN and aggregations
we can set the default for `action.search.shard_count.limit` to unlimited.
This still allows users to restrict this setting, while by default we execute
across all shards matching the search request's index pattern.
The getProperty method is an internal method needed to run pipeline aggregations and retrieve info by path from the aggs tree. It is not needed in the MultiBucketsAggregation.Bucket interface, which is returned to users running aggregations from the transport client. The method is moved to the InternalMultiBucketAggregation class as that's where it belongs.
The `getProperty` method is an internal method needed to run pipeline aggregations and retrieve info by path from the aggs tree. It is not needed in the `Aggregations` interface, which is returned to users running aggregations from the transport client. Furthermore, the method is currently unused by pipeline aggs too, as only InternalAggregation#getProperty is used. It can then be removed.
We deprecated this method in the past because we thought it was a temporary thing that could go away over time. We radically trimmed down the usages of a context while parsing when we got rid of the ParseFieldMatcher, but the usages that are left are legit and we will hardly get rid of them. Also, working on aggs parsing we will need a context to carry around the aggregation name that gets parsed through XContentParser#namedObject.
After two nodes are being stopped and two more are joining the cluster, we first have to wait on the cluster to consist of the right nodes before
waiting on green status, otherwise we might get a green status for a cluster with dead nodes.
_field_stats has evolved quite a lot to become a multi-purpose API capable of retrieving the field capabilities and the min/max value for a field.
In the meantime a more focused API called `_field_caps` has been added; this endpoint is a good replacement for _field_stats since it can
retrieve the field capabilities by just looking at the field mapping (no lookup in the index structures).
Also, the recent improvements made to range queries make the _field_stats API obsolete, since these queries are now rewritten per shard based on the min/max found for the field.
This means that a range query that does not match any document in a shard can return quickly and can be cached efficiently.
For these reasons this change deprecates _field_stats. The deprecation should happen in 5.4, but we won't remove this API in 6.x yet, which is why
this PR is made directly to 6.0.
The rest tests have also been adapted to not throw an error while this change is backported to 5.4.
This commit adds support for incremental top N reduction if the number of
expected shards in the search request is high enough. The changes here
also clean up more code in SearchPhaseController to make the separation
between values that are the same on each search result and values that
are per response. The reduced search phase result doesn't hold an arbitrary
result to obtain values like `from`, `size` or sort values which is now
cleanly encapsulated.
The refactoring in #23711 hardcoded version logic for replica to assume monotonic versions. Sadly that's wrong for `FORCE` and `VERSION_GTE`. Instead we should use the methods in VersionType to detect conflicts.
Note - once replicas use sequence numbers for out of order delivery, this logic goes away.
This commit upgrades the Log4j dependencies from version 2.7 to version
2.8.2. This release includes a fix for a case where Log4j could lose
exceptions in the presence of a security manager.
Relates #23995
The ExtrasFS filesystem creates extra directories when creating temp
directories during tests to ensure that Lucene does not care about extra
files. These extra files get in our way in the plugins service tests
because some of these tests are counting only on certain directories
existing. This commit suppresses the ExtrasFS filesystem for the plugins
service tests, and fixes a test that was passing for the wrong reason
(because of the existence of an extra directory from ExtrasFS).
This commit removes some leniency from the plugin service which skips
hidden files in the plugins directory. We really want to ensure the
integrity of the plugin folder, so hasta la vista leniency.
Relates #23982
This commit removes the "legacy" feature of secure settings, which set up
a parallel setting that was a fallback in the insecure
elasticsearch.yml. This was previously used to allow the new secure
setting name to be that of the old setting name, but is now not in use
due to other refactorings. It is much cleaner to just have all secure
settings use new setting names. If in the future we want to reuse the
previous setting name, once support for the insecure settings have been
removed, we can then rename the secure setting. This also adds a test
for the behavior.
Empty meta gets printed out, which means that if the request contains an empty meta object, that is returned with the response as well. On the other hand, null (meaning the object is not in the request) is not printed out. ParsedAggregation used to not print out empty metadata and didn't allow the null value. This aligns the behaviour with the existing behaviour of InternalAggregation.
This test was sporadically failing for the following reason:
- 4 nodes (nodes 0, 1, 2, and 3) running with `minimum_master_nodes` set to 3
- we stop 2 nodes (node 0 and 3)
- wait for cluster block to be in place on all nodes
- start 2 nodes (node 4 and node 5) and do a `prepareHealth().setWaitForNodes("4")`
- then do a search request
The search request runs into the `ClusterBlockException` as the `prepareHealth().setWaitForNodes("4")` check succeeds on a cluster state that has
nodes 1, 2, 3, and 4, i.e., only one of the two new nodes has joined the cluster and only one of the two dead nodes was removed by the master
(removing the dead nodes only happens after there are again `minimum_master_nodes` nodes in the cluster).
This commit fixes the issue by reusing a method from InternalTestCluster that checks that the right nodes have rejoined the cluster.
The test assumes that two nodes leaving the cluster results in two cluster state updates on the master, which is invalidated by cluster state
batching.
Shuffling xContent breaks the order of the highlighter fields in the
internal list if the highlighter doesn't use the array syntax. In other tests we
avoid shuffling this json level, but since this is done in the base test for
aggregations we should ensure the highlight builder uses the array syntax here.
The `getProperty` method is an internal method needed to run pipeline aggregations and retrieve info by path from the aggs tree. It is not needed in the `Aggregation` interface, which is returned to users running aggregations from the transport client. The method is moved to the InternalAggregation class as that's where it belongs.
ESTestCase has methods to shuffle xContent keys given a builder or a parser. Shuffling wasn't actually doing what was expected but rather reordering the keys in their natural ordering, hence the output was always the same at every run. Corrected that and added tests, also fixed a couple of tests that were affected by this fix.
If a snapshot is taken on multiple indices, and some of them are "good"
indices that don't contain any corruption or failures, and some of them
are "bad" indices that contain missing shards or corrupted shards, and
if the snapshot request is set to partial=false (meaning don't take a
snapshot if there are any failures), then the good indices will not be
snapshotted either. Previously, when getting the status of such a
snapshot, a 500 error would be thrown, because the snap-*.dat blob for
the shards in the good index could not be found.
This commit fixes the problem by reporting shards of good indices as
failed due to a failed snapshot, instead of throwing the
NoSuchFileException.
Closes #23716
This change disables graph analysis of token streams containing a shingle or a CJK filter that produces shingles or ngrams of different sizes. The graph analysis is disabled for phrase and boolean queries.
Closes #23918
This commit modifies the BulkProcessor to be decoupled from the
client implementation. Instead it just takes a
BiConsumer<BulkRequest, ActionListener<BulkResponse>> that executes
the BulkRequest.
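A hedged example of the decoupled wiring (it assumes the BiConsumer-based builder described above; `client` stands in for any Client implementation):
```
import java.util.function.BiConsumer;

import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.client.Client;

public class BulkProcessorWiring {
    static BulkProcessor build(Client client) {
        // the processor no longer needs a Client; it only needs something that can execute a BulkRequest
        BiConsumer<BulkRequest, ActionListener<BulkResponse>> consumer =
                (request, listener) -> client.bulk(request, listener);
        return BulkProcessor.builder(consumer, new BulkProcessor.Listener() {
            @Override public void beforeBulk(long executionId, BulkRequest request) {}
            @Override public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {}
            @Override public void afterBulk(long executionId, BulkRequest request, Throwable failure) {}
        }).build();
    }
}
```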
When executing an index operation on the primary shard,
`TransportShardBulkAction` first parses the document, sees if there are any
mapping updates that need to be applied, and then updates the mapping on the
master node. It then re-parses the document to make sure that the mappings have
been applied and propagated.
This adds a check that skips the second parsing of the document in the event
there was not a mapping update applied in the first case.
Fixes a performance regression introduced in #23665
Today we have several code paths to merge top docs based on the number of
search results returned from the shards. If there is only a single shard
holding any hits we go down a different code path with quite some complexity, while
if there are more than one the code is basically duplicated to save the
creation of a dense array of top docs, which can be large if there are many results.
This commit removes the need for the dense array and, in turn, the justification for
the optimization. This commit introduces a single code path to merge top docs.
The InternalEngine Index/Delete methods (plus satellites like version loading from Lucene) have accumulated some cruft over the years, making it hard to clearly follow the code flows for various use cases (primary indexing/recovery/replicas etc). This PR refactors those methods for better readability. The methods are broken up into smaller sub-methods, albeit at the price of less code reuse.
To support the refactoring I have considerably beefed up the versioning tests.
This PR is a spin-off from #23543, which made it clear this is needed.
The purpose of this validation is to make sure that the master doesn't step down
due to a change in master nodes, which also means that there is no way to revert
an accidental change. Since we validate using the current cluster state (and
not the one from which the settings come) we have to be careful and only
validate if the local node is already a master. Doing so all the time causes
subtle issues. For example, a node that joins a cluster has no nodes in its
current cluster state. When it receives a cluster state from the master with
a dynamic minimum master nodes setting in it, we must make sure we don't reject it.
Closes #23695
SingleNodeDiscoveryIT uses a hardcoded port for the purpose of binding
two nodes within the limited port range that an unconfigured unicast zen
ping hosts list would try to discover another node on. This commit at
least removes this hardcoding for the first node to come up, although
still tries to bind the second node to the limited port range after the
first node has bound.
This commit makes closing a ReleasableBytesStreamOutput release the underlying BigArray so
that we can use try-with-resources with these streams and avoid leaking memory by not returning
the BigArray. As part of this change, the ReleasableBytesStreamOutput adds protection to only release the BigArray once.
In order to make some of the changes cleaner, the ReleasableBytesStream interface has been
removed. The BytesStream interface is changed to an abstract class so that we can use it as a
usable return type for a new method, Streams#flushOnCloseStream. This new method wraps a
given stream and overrides the close method so that the stream is simply flushed and not closed.
This behavior is used in the TcpTransport when compression is used with a
ReleasableBytesStreamOutput as we need to close the compressed stream to ensure all of the data
is written from this stream. Closing the compressed stream will try to close the underlying stream
but we only want to flush so that all of the written bytes are available.
Additionally, an error message method added in the BytesRestResponse did not use a builder
provided by the channel and instead created its own JSON builder. This changes that method to use the channel builder and in turn the bytes stream output that is managed by the channel.
This commit renames the random ASCII helper methods in ESTestCase. This
is because this method ultimately uses the random ASCII methods from
randomized runner, but these methods actually only produce random
strings generated from [a-zA-Z].
Relates #23886
This commit adds a description for a parameter that was added to
BootstrapChecks#enforceLimits(BoundTransportAddress, String) without the
Javadocs having been updated.
While there are use-cases where a single-node is in production, there
are also use-cases for starting a single-node that binds transport to an
external interface where the node is not in production (for example, for
testing the transport client against a node started in a Docker
container). It's tricky to balance the desire to always enforce the
bootstrap checks when a node might be in production with the need for
the community to perform testing in situations that would trip the
bootstrap checks. This commit enables some flexibility for these
users. By setting the discovery type to "single-node", we disable the
bootstrap checks independently of how transport is bound. While this
sounds like a hole in the bootstrap checks, the bootstrap checks can
already be avoided in the single-node use-case by binding only HTTP but
not transport. For users that are genuinely in production on a
single-node use-case with transport bound to an external interface, they
can set the system property "es.enable.bootstrap.checks" to force
running the bootstrap checks. It would be a mistake for them not to do
this.
Relates #23598
This change adds a setting property that sets the value of a setting as final.
Updating a final setting is prohibited in any context, for instance an index setting
marked as final must be set at index creation and will refuse any update even if the index is closed.
This change also marks the setting `index.number_of_shards` as final, and the special casing for refusing updates on this setting has been removed.
This commit adds a single node discovery type. With this discovery type,
a node will elect itself as master and never form a cluster with another
node.
Relates #23595
When terminating an executor service or a thread pool, we first
shutdown. Then, we do a timed await termination. If the await
termination fails because there are still tasks running, we then
shutdown now. However, this method does not wait for actively executing
tasks to terminate, so we should again wait for termination of these
tasks before returning. This commit does that.
Relates #23889
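The pattern in question looks roughly like this (a simplified sketch, not the exact Elasticsearch termination code):
```
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

public class TerminateSketch {
    static boolean terminate(ExecutorService service, long timeout, TimeUnit unit) throws InterruptedException {
        service.shutdown();                              // stop accepting new tasks
        if (service.awaitTermination(timeout, unit)) {
            return true;                                 // everything finished within the timeout
        }
        service.shutdownNow();                           // cancel queued tasks, interrupt running ones
        // shutdownNow does not wait for actively executing tasks, so wait for termination once more
        return service.awaitTermination(timeout, unit);
    }
}
```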
If a test touches ElasticsearchExceptionHandle before the class
initializer for ElasticsearchException has run, a circular class
initialization problem can arise. Namely, the class initializer for
ElasticsearchExceptionHandle depends on the class initializer for
ElasticsearchException, which depends on the class initializers for
all the classes that extend ElasticsearchException, but these classes
can not be loaded because ElasticsearchException has not finished its
class initializer. There are tests that can trigger this before
ElasticsearchException has been loaded due to an unlucky ordering of
test execution. This commit addresses this issue by making
ElasticsearchExceptionHandle private, and then exposing methods that
provide the necessary values from ElasticsearchExceptionHandle. Touching
these methods will force the class initializer for
ElasticsearchException to run first.
Currently for field sorting we always use a custom sort field and a custom comparator source.
Though for numeric fields this custom sort field could be replaced with a standard SortedNumericSortField unless
the field is nested, especially since we removed the fielddata for numerics.
We can also use a SortedSetSortField for string sorting based on doc_values when the field is not nested.
This change replaces IndexFieldData#comparatorSource with IndexFieldData#sortField, which returns a Sorted{Set,Numeric}SortField when possible or a custom
sort field when the field sort spec is not handled by these sorted sort fields.
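For illustration, the kind of standard Lucene sort fields that can now be returned directly (field names are arbitrary):
```
import org.apache.lucene.search.SortField;
import org.apache.lucene.search.SortedNumericSortField;
import org.apache.lucene.search.SortedSetSortField;

public class DocValuesSortExample {
    // numeric doc_values sort, descending, without a custom comparator source
    static final SortField BY_PRICE = new SortedNumericSortField("price", SortField.Type.LONG, true);

    // keyword (ordinal) doc_values sort, ascending
    static final SortField BY_CATEGORY = new SortedSetSortField("category", false);
}
```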
Today we prevent nodes from joining when indices exist that are too old.
Yet, the opposite can happen too: since Lucene / Elasticsearch is not forward
compatible when it comes to indices, we won't let nodes join the cluster once
there are indices in the cluster state that are newer than the node's version.
This prevents forward compatibility issues, which we never test against. Yet,
this will not prevent rolling restarts or anything like this, since indices
are always created with the minimum node version in the cluster, such that an index
can only get the version of the higher nodes once all nodes are upgraded to this version.
TopDocs et al. got additional parameters to incrementally reduce
top docs. In order to add incremental reduction `CollapseTopFieldDocs`
needs to have the same properties.
Remote nodes in cross-cluster search can be marked as eligible for
acting as a gateway node via a remote node attribute setting. For example,
if search.remote.node.attr is set to "gateway", only nodes that have
node.attr.gateway set to "true" can be connected to for cross-cluster
search. Unfortunately, there is a bug in the handling of these
attributes due to the use of a dangerous method
Boolean#getBoolean(String) which obtains the system property with
specified name as a boolean. We are not looking at system properties
here, but node settings. This commit fixes this situation, and adds a
test. A follow-up will ban the use of Boolean#getBoolean.
Relates #23863
This commit changes the ClusterStatsNodes.NetworkTypes so that it does
not print out empty field names when no transport or HTTP type is defined:
```
{
"network_types": {
...
"http_types": {
"": 2
}
}
}
```
is now rendered as:
```
{
"network_types": {
...
"http_types": {
}
}
}
```