Currently all the reroute-like methods of `AllocationService` return a result object of type `RoutingAllocation.Result`. The result object contains the new `RoutingTable` and `MetaData` plus an indication of whether those were changed. The caller is then responsible for updating a cluster state with these, which means things can easily go wrong: one can apply one of them but not the other, causing inconsistencies. We already have a utility method on the `ClusterState` builder that does this, but nothing forces you to use it. Also, 99% of the callers do the same thing: check if the result was changed and, if so, update the very same cluster state that was passed to `AllocationService`. This PR folds this pattern into `AllocationService` and changes almost all of its methods to return a new cluster state (potentially the original one). This saves some 500 lines of code.
The one exception here is the reroute API, which executes allocation commands and potentially returns an explanation as well (next to the routing table and metadata). That API now returns a `CommandsResult` object which encapsulates a cluster state and the explanation.
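For illustration, a rough sketch of the caller-side pattern that gets folded in; `reroute` and `changed` are named in this description, while the builder helper name `routingResult` is an assumption, and exact signatures may differ:

```java
// Old pattern: every caller takes the RoutingAllocation.Result, checks whether
// anything changed, and rebuilds the cluster state itself.
RoutingAllocation.Result result = allocationService.reroute(clusterState, "some reason");
ClusterState newState = result.changed()
        ? ClusterState.builder(clusterState).routingResult(result).build()
        : clusterState;

// New pattern: AllocationService performs the check and the rebuild internally and
// returns a cluster state directly (potentially the very same instance it was given).
ClusterState newState2 = allocationService.reroute(clusterState, "some reason");
```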
Funny node names were removed in #19456 and replaced by UUIDs. This commit removes these obsolete node names from the documentation and replaces them with real UUIDs.
closes #20065
`gender.keyword` should be used instead of just `gender` or we
get an error: `Fielddata is disabled on text fields by default. Set
fielddata=true on [gender] in order to load fielddata in memory by
uninverting the inverted index. Note that this can however use
significant memory.`
Closes #20535
Clarifies the documentation for the
`cluster.routing.allocation.cluster_concurrent_rebalance` setting,
explaining in which shard allocation situations the rebalance limit
takes effect.
Closes #20529
A few of our unit tests generate a random search request body and run tests against it. The source can optionally contain ext elements under the ext section, which can be parsed by plugins. With this commit we introduce a test plugin so that the tests no longer use the one from FetchSubPhasePluginIT. They instead generate multiple search ext elements, all of which the new plugin can parse and handle. This extends the test coverage, as we may now have multiple ext elements with random names.
We also took the chance to introduce a common test base class for search requests, called AbstractSearchTestCase, given that the setup phase is the same for all three tests around the search source. The setup is now isolated in the base class and the subclasses rely on it.
Closes #17685
* Throw error if query element doesn't end with END_OBJECT
Followup to #20515, where we added validation that after we parse a query within a query element we should not get a field name. Truth is that the only token allowed at that point is END_OBJECT, as our DSL allows only a single query within the query object:
```
{
"query" : {
"term" : { "field" : "value" }
}
}
```
We can then check that after parsing the query we have an END_OBJECT that closes the query itself (which we already do). Following that, we can check that the query object is immediately closed, as no other token is allowed in that position.
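A minimal sketch of that extra check, assuming the usual `XContentParser` token API (illustrative only, not the actual parser code from this change):

```java
import java.io.IOException;

import org.elasticsearch.common.ParsingException;
import org.elasticsearch.common.xcontent.XContentParser;

final class QueryElementChecks {
    // After the inner query and its closing END_OBJECT have been consumed,
    // the next token must close the "query" object itself.
    static void ensureQueryObjectClosed(XContentParser parser) throws IOException {
        XContentParser.Token token = parser.nextToken();
        if (token != XContentParser.Token.END_OBJECT) {
            throw new ParsingException(parser.getTokenLocation(),
                    "expected [END_OBJECT] but found [" + token + "]");
        }
    }
}
```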
Relates to #20515
I'd made some mistakes that hadn't caused the test to fail but did
slow it down and partially invalidate some of the assertions. This
fixes those mistakes.
* Fix FieldStats deserialization of `ip` field
Add missing readBytes in `ip` field deserialization
Add (de)serialization tests for all types
This change also removes the ability to set FieldStats.minValue or FieldStats.maxValue to null.
This is not required anymore since the stats are built on fields with values only.
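As a generic illustration of the kind of bug involved (this uses Elasticsearch's stream API but is not the actual `FieldStats` code), every value written in `writeTo` must be read back in the same order; dropping a read leaves the stream misaligned:

```java
import java.io.IOException;

import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;

// Hypothetical holder of byte[]-encoded bounds (e.g. ip addresses), used only to
// show the write/read symmetry that a missing readBytes-style call violates.
final class EncodedBounds {
    final byte[] minValue;
    final byte[] maxValue;

    EncodedBounds(byte[] minValue, byte[] maxValue) {
        this.minValue = minValue;
        this.maxValue = maxValue;
    }

    void writeTo(StreamOutput out) throws IOException {
        out.writeByteArray(minValue);
        out.writeByteArray(maxValue);
    }

    static EncodedBounds readFrom(StreamInput in) throws IOException {
        // Forgetting one of these reads is exactly this kind of bug: subsequent
        // fields would be deserialized from the wrong bytes.
        return new EncodedBounds(in.readByteArray(), in.readByteArray());
    }
}
```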
Fixes #20516
If an index was created with a pre-2.0 version we should not treat it as supported,
even if all segments have been upgraded to a supported Lucene version.
Closes #20512
TransportService is such a central part of the core server that replacing
its implementation is risky and can cause serious issues. This change removes the ability to
plug in TransportService but allows registering a TransportInterceptor that enables
plugins to intercept requests on both the sender and the receiver ends. Intercepting requests is
commonly used and overridden functionality, and the interceptor keeps such custom code contained.
During a network partition, cluster state updates (like mapping changes or shard assignments)
are committed if a majority of the master-eligible nodes received the update correctly. This means that the current master has access to enough nodes in the cluster to continue to operate correctly. When the network partition heals, the isolated nodes catch up with the current state and receive the changes they couldn't get before. However, if a second partition happens while the cluster
is still recovering from the previous one *and* the old master ends up on the minority side, a new master may be elected that has not yet caught up. If that happens, cluster state updates can be lost.
This commit fixes 95% of this rare problem by adding the current cluster state version to `PingResponse` and using it when deciding which master to join (and thus casting the node's vote).
Note: this doesn't fully mitigate the problem, as a cluster state update issued concurrently with a network partition can still be lost if the partition prevents the commit message (part of the two-phase commit of cluster state updates) from reaching any single node on the majority side *and* the partition still allows the master to acknowledge the change. We are working on a more comprehensive fix, but that requires considerable work and is targeted at 6.0.
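A hypothetical sketch of the resulting preference when picking a master to join (class and field names here are made up for illustration, not the actual ZenDiscovery code): among the master-eligible candidates seen in ping responses, favor the one advertising the highest cluster state version.

```java
import java.util.Comparator;
import java.util.List;

// Illustrative only: a master candidate as seen through a ping response that now
// carries the advertised cluster state version.
final class MasterCandidate {
    final String nodeId;
    final long clusterStateVersion;

    MasterCandidate(String nodeId, long clusterStateVersion) {
        this.nodeId = nodeId;
        this.clusterStateVersion = clusterStateVersion;
    }

    // Prefer the candidate that has seen the most cluster state updates, so a node
    // that is still catching up after a partition does not win the election; use the
    // node id as a stable tie-breaker.
    static MasterCandidate electBest(List<MasterCandidate> candidates) {
        return candidates.stream()
                .max(Comparator.comparingLong((MasterCandidate c) -> c.clusterStateVersion)
                        .thenComparing(c -> c.nodeId))
                .orElseThrow(() -> new IllegalArgumentException("no master candidates"));
    }
}
```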
Currently, we check if a node has the same set of custom metadata as the master
before joining the cluster. This implies that freshly installing a plugin that brings its own
custom metadata requires a full cluster restart.
Currently, we silently accept a malformed query where more
than one key is defined at the top level of the query object.
If all the keys have a valid query body, only the last query
is executed, besides throwing off parsing of any additional suggest,
aggregation or highlighting defined in the search request.
This commit throws a parsing exception when we encounter a query
with multiple top-level keys.
closes #20500
The default of 30s causes some tests to time out when running ensureGreen and similar. This is because the network delay simulation blocks connecting until either the connect timeout expires or the disruption's configured duration elapses. We do *not* immediately connect when the disruption is stopped.
The only repository we can be sure is safe to clean is `fs` so we clean
any snapshots in those repositories after each test. Other repositories
like url and azure tend to throw exceptions rather than let us fetch
their contents during the REST test. So we clean what we can....
Closes #18159
`profile.asciidoc` now runs all of its commands but it doesn't validate
all of the results. Writing the validation is time consuming so I only
did some of it.
Today when starting Elasticsearch without a Log4j 2 configuration file,
we end up throwing an array index out of bounds exception. This is
because we are passing no configuration files to Log4j. Instead, we
should present the user with a useful error message. This commit modifies
the Log4j configuration setup to throw a user exception if no Log4j
configuration files are present in the config directory.
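Roughly, the added guard amounts to something like the following sketch (not the actual LogConfigurator code; the exception type and exit code used here are assumptions):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

import org.elasticsearch.cli.ExitCodes;
import org.elasticsearch.cli.UserException;

final class Log4jConfigCheck {
    // Collect every log4j2.properties under the config directory and fail with a
    // clear message if none are found, instead of letting Log4j fail obscurely later.
    static List<Path> findConfigs(Path configDir) throws IOException, UserException {
        final List<Path> configs = new ArrayList<>();
        try (Stream<Path> stream = Files.walk(configDir)) {
            stream.filter(p -> p.getFileName().toString().equals("log4j2.properties"))
                  .forEach(configs::add);
        }
        if (configs.isEmpty()) {
            throw new UserException(ExitCodes.CONFIG,
                    "no log4j2.properties found in [" + configDir + "] or its subdirectories");
        }
        return configs;
    }
}
```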
Relates #20493
The test uses a NetworkDelay disruption that drops requests and slows down connecting. Next to that it disables node fault detection to make sure nodes are not removed before we check our publishing. Sadly, that can lead to huge slowdowns if the disruption hits while a node is still pinging (and tries to connect, which is slowed down). Instead we can start the disruption on the cluster state thread, making sure the result of fault detection won't be processed before we publish.
LongGCDisruption simulates a long GC by suspending all threads belonging to a node. That's fine, unless those threads hold shared locks that can prevent other nodes from running. Concretely, the logging infrastructure, which is shared between the nodes, can cause some deadlocks. LongGCDisruption has protection for this, but it needs to be updated to point at the log4j2 classes introduced in #20235.
This commit also fixes improper handling of retry logic in LongGCDisruption and adds protection against deadlocking the test code which activates the disruption (and uses logging too! :)).
On top of that we have some new, evil and nasty tests.
Currently we always pass -E to the plugin cli with the conf dir, but
this causes a very confusing error message when not giving a specific
command to the plugin cli. This change makes path.conf be passed just like
path.home. These are special settings, so passing via sysprops is the
right thing to do (it is all about how we pass between the shell and the java
cli).
closes #18689
This was actually a byproduct of trying to remove a //norelease for
index shard setting validation in MetaDataIndexService. This //norelease
is now removed. Previously this check was *only* used by the template
service, so we validated twice, once in the Settings infrastructure and
once when actually creating the index. We now instead use the Settings
infrastructure to validate the settings for shard count.
`TransportService#registerRequestHandler` allowed handlers to be registered
more than once and issued an annoying warn log message when
this happened. This change simply throws an exception to prevent registering
the same handler more than once. This commit also removes the ability
to remove request handlers.
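A minimal sketch of the stricter behavior (the registry shape and names here are assumptions for illustration, not the actual TransportService internals):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class RequestHandlers<H> {
    private final Map<String, H> handlersByAction = new ConcurrentHashMap<>();

    // Previously a duplicate registration silently replaced the old handler and only
    // logged a warning; now it is rejected outright.
    void register(String action, H handler) {
        if (handlersByAction.putIfAbsent(action, handler) != null) {
            throw new IllegalArgumentException(
                    "transport handler for action [" + action + "] is already registered");
        }
    }

    H get(String action) {
        return handlersByAction.get(action);
    }
}
```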
Relates to #20468
The refresh description should indicate that the affected shards are
refreshed as opposed to the entire index.
This was raised on
Discuss (https://discuss.elastic.co/t/refresh-parameter-of-index-api/59008/2)
as a discrepancy in the .NET client, which originates from code generated from the REST API
spec. The description has been updated in master but should also be updated
for the 2.4.0 release.
BATS upgrade tests fail on the master branch because they try to install 2.x versions to upgrade from instead of 5.x versions, and since #18554 we should only test upgrades from 5.0.0-alpha4 versions.
This commit changes the vagrant tests so that they try to list all the previous releases from version N-1. If nothing is found, they fall back to the current version and run the upgrade tests with it. This works nicely with the current master 6.0.0-alpha1-SNAPSHOT. Once 5.0.0 is released the tests will run against it.
When uninstalling or upgrading elasticsearch using the RPM package some empty directories remain on the filesystem:
/usr/share/elasticsearch/bin
/usr/share/elasticsearch/lib
/usr/share/elasticsearch/modules
/usr/share/elasticsearch/modules/foo
Having empty directories in modules can prevent Elasticsearch from starting after an upgrade: the plugins service expects to find a plugin-descriptor.properties file in every subdirectory of modules.
This PR cleans things up a bit so that these empty directories are removed on upgrade/removal, as they were in 2.x.
* Update rescoring docs with respect to sort
If sort is present in a query, the rescore query is not executed. As long as this feature is neither implemented (see discussion in #6788) nor the combination of sort and rescoring raises an error, we should warn the user about this in the documentation.
* Missed a dot