When overriding a systemd configuration via a drop-in file, the
[Service] header is required. This commit adds this to an example
drop-in override in the configuring docs.
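As a sketch, assuming the memlock override used in the configuring docs (both the file path and the overridden setting are illustrative here; the point is the header):
```
# e.g. /etc/systemd/system/elasticsearch.service.d/override.conf
[Service]
LimitMEMLOCK=infinity
```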
Relates #22038
This commit enhances the allocator decision result objects (namely,
AllocateUnassignedDecision, MoveDecision, and RebalanceDecision)
to enable them to be used directly by the cluster allocation explain API. In
particular, this commit does the following (a sketch of the resulting response
shape follows the list):
- Adds serialization and toXContent methods to the response objects,
which will form the explain API responses.
- Moves the calculation of the final explanation to the response
object itself, removing it from the responsibility of the allocators.
- Adds shard store information to the NodeAllocationResult, so that
store information is available for each node when explaining a shard
allocation by the PrimaryShardAllocator or the ReplicaShardAllocator.
- Removes RebalanceDecision in favor of using MoveDecision for both
moving and rebalancing shards.
- Removes NodeRebalanceResult in favor of using NodeAllocationResult.
- Changes the notion of weight ranking to be relative to the current node,
instead of an absolute weight that doesn't convey any added value to the
API user and can be confusing.
- Introduces a new enum AllocationDecision to convey the decision type,
which enables conveying unassigned, moving, and rebalancing scenarios
with more detail as opposed to just Decision.Type and AllocationStatus.
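As a rough sketch of how these pieces might surface in an explain API response (the field names below are illustrative assumptions, not the exact output):
```
{
  "can_allocate": "no",
  "allocate_explanation": "cannot allocate because ...",
  "node_allocation_decisions": [
    {
      "node_id": "node-1",
      "node_decision": "no",
      "weight_ranking": 1,
      "store": {
        "found": false
      }
    }
  ]
}
```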
* Ingest: Moved ingest invocation into index/bulk actions
Ingest was originally set up as a plugin, and action filters were used to hook
into the index and bulk actions. Ingest was later moved into core, but the
action filters were never removed. This change moves the execution of ingest
into the index and bulk actions.
* Address PR comments
* Remove forwarder direct dependency on ClusterService
The use of the avg aggregation for sorting the terms aggregation is not encouraged since it has unbounded error. This changes the examples to use the max aggregation, which does not suffer from the same issue.
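For instance, an example ordering a terms aggregation by a max sub-aggregation would look roughly like this (the index and field names are made up for illustration):
```
POST /sales/_search?size=0
{
  "aggs": {
    "genres": {
      "terms": {
        "field": "genre",
        "order": { "max_price": "desc" }
      },
      "aggs": {
        "max_price": { "max": { "field": "price" } }
      }
    }
  }
}
```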
This adds a fromXContent method and unit test to the HighlightField class so we
can parse it as part of a search response. This is part of the preparation for
parsing search responses on the client side.
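For context, the piece being parsed is the per-field entry under `highlight` in a search hit, along these lines (the field name and snippet are illustrative):
```
"highlight": {
  "body": [
    "a fragment with the <em>matching</em> term"
  ]
}
```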
Failing an initializing primary when shadow replicas are enabled for the index can leave the primary unassigned while replicas are active. Instead, a replica should be promoted to primary, which is what this commit fixes.
This commit makes two changes to how the in-sync allocations set is updated:
- the set is only trimmed when it grows. This prevents trimming too eagerly when the number of replicas was decreased while shards were unassigned.
- the allocation id of an active primary that failed is only removed from the in-sync set if another replica gets promoted to primary. This prevents the situation where the only available shard copy in the cluster gets removed from the in-sync set.
Closes #21719
We had tests for the regular factories, but not for the pre-built ones, which
ship by default without requiring users to define them in the analysis settings.
Currently we expose the internal representation that we use for IP addresses,
which is the IPv6 bytes. However, this is not really usable, exposes internal
implementation details, and does not work well with other APIs that expect
the values to be `toString`'d.
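For example, requesting doc values for an ip field would then return the formatted address rather than the raw IPv6 bytes; a sketch with made-up index and field names:
```
GET /logs/_search
{
  "docvalue_fields": ["client_ip"]
}
```
With this change a hit would carry something like `"fields": {"client_ip": ["192.168.0.1"]}` instead of the internal byte representation.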
Closes #21977
`_update_by_query` supports specifying the `pipeline` to process the
documents as a URL parameter but `_reindex` doesn't. It doesn't because
everything about the `_reindex` request that has to do with writing
the documents is grouped under the `dest` object in the request body.
This changes the error message from
`request [_reindex] contains unrecognized parameter: [pipeline]` to
`_reindex doesn't support [pipeline] as a query parameter. Specify it in the [dest] object instead.`
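That is, the pipeline is specified in the request body rather than the URL, along these lines (the pipeline and index names are illustrative):
```
POST _reindex
{
  "source": {
    "index": "source"
  },
  "dest": {
    "index": "dest",
    "pipeline": "my-pipeline"
  }
}
```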
When you submit a _termvectors request for an artificial document and
specify the 'preference' parameter to send the request to a particular
shard, the request sometimes hits a NullPointerException. Fix this case by
ignoring the auto-generated artificial document ID and picking a shard per
the preference parameter, or a random shard otherwise.
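Such a request looks roughly like this (the index, type, field, and preference value are illustrative):
```
GET /my-index/my-type/_termvectors?preference=_local
{
  "doc": {
    "message": "some text to analyze"
  }
}
```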
This closes #21928
* Include unindexed field in FieldStats response
This change adds non-searchable fields to the FieldStats response. These fields do not have min/max information but they can be aggregatable. Fields that are only stored in _source (store:no, index:no, doc_values:no) will still be missing since they do not have any useful information to show. Indices and clients must be at least on V_5_2_0 to see this change.
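A non-searchable but aggregatable field would then appear in the response with its flags but without min/max values, roughly like this (the field name is illustrative and the exact set of stats may differ):
```
"fields": {
  "session_id": {
    "searchable": false,
    "aggregatable": true
  }
}
```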
Since the removal of local discovery in https://github.com/elastic/elasticsearch/pull/20960 we rely on minimum master nodes to be set in our test cluster. The setting is automatically managed by the cluster (by default) but the current management doesn't work with concurrent single-node async starting. On the other hand, with `MockZenPing` and `discovery.initial_state_timeout` set to `0s`, node starting and joining is very fast, making async starting an unneeded complexity. Tests that still need async starting could, in theory, still do so themselves via background threads.
Note that this change also removes the usage of `INITIAL_STATE_TIMEOUT_SETTINGS` as the starting of nodes is done concurrently (but building them is sequential)
With this commit we document that ingest processors need to be
thread-safe. Previously this could be inferred from reading the
source code, but we got several user questions about it, so it is
now stated explicitly in the Javadocs of Processor.
Action filters currently have the ability to filter both the request and
response. But the response side was not actually used. This change
removes support for filtering responses with action filters.
Changes the default socket and connection timeouts for the rest
client from 10 seconds to the more generous 30 seconds.
Defaults reindex-from-remote to those timeouts and makes the
timeouts configurable like so:
```
POST _reindex
{
  "source": {
    "remote": {
      "host": "http://otherhost:9200",
      "socket_timeout": "1m",
      "connect_timeout": "10s"
    },
    "index": "source",
    "query": {
      "match": {
        "test": "data"
      }
    }
  },
  "dest": {
    "index": "dest"
  }
}
```
Closes #21707
The RecoveryFailedException's output prints the source and
target nodes for the recovery. However, sometimes there is
no source node for the recovery, only a target node (such as
when recovering a primary shard from disk). In this case,
we don't want to display the source node. This commit fixes
this by displaying "Recovery failed on target node.." instead
of "Recovery failed from null to target node" which is what the
output currently displays.
This commit increases the test logging on the unicast zen ping test of
simple pings to gather more info for chasing a race condition that is
happening in this test.
Sadly, the timeouts here need to be increased to reduce the likelihood
of spurious test failures (test hosts under load are especially prone to
this). This does slow down this test suite a bit, but it's still not as
slow as it was before this endeavor of lowering these timeouts started.
This commit cleans up the unicast zen ping unknown hosts cached test:
- send pings from the same node to more clearly indicate DNS lookups
are not cached (within the same UnicastZenPing instance)
- increase ping and wait timeout to 500ms to address race conditions
(on a test host under load, the timeout was too short for the
connect/handshake/ping cycle to complete)
The port limit test is a simple test that fakes that resolving an
address with a port range results in the correct address collection. This
test is subject to a race condition where the timeout on the resolve
request can fire before the resolve code finishes executing (this race
is exceptionally rare, because there are not actually any DNS lookups
being done here since we are just resolving addresses). This commit
increases the timeout here to significantly reduce the chance of a
lost race causing a spurious test failure. This increased timeout
should not increase the runtime of the test, just make failures less
likely.
The unknown hosts test is a simple test that fakes that resolving an
address results in an unknown host exception. The main purpose of this
test is to ensure that we log (and do not silently drop) when a host
fails to resolve. This test is subject to a race condition where the
timeout on the resolve request can fire before the resolve code finishes
executing (this race is exceptionally rare, because there are not
actually any DNS lookups being done here, just a mock resolve
implementation that throws an exception, which is where the race can be
lost). This commit increases the timeout here to significantly
reduce the chance of a lost race causing a spurious test failure. This
increased timeout should not increase the runtime of the test, just make
failures less likely.
This commit renames InternalEngine#loadSeqNoStatsLucene to
InternalEngine#loadSeqNoStatsFromLucene to make this name consistent
with the method InternalEngine#loadSeqNoStatsFromLuceneAndTranslog.
* Plugins: Replace Rest filters with RestHandler wrapper
RestFilters are a complex way of allowing plugins to add extra code
before rest actions are executed. This change removes rest filters and
replaces them with a wrapper that a single plugin may provide.
Today when starting a new engine, we read the global checkpoint from the
translog only if we are opening an existing translog. This commit
clarifies this situation by distinguishing the three cases of engine
creation in the constructor, leading to clearer code.
Relates #21934
Today if system call filters fail to install on startup, we log a
message but otherwise march on. This might leave users without system
call filters installed not knowing that they have implicitly accepted
the additional risk. We should not be lenient like this, instead clearly
informing the user that they have to either fix their configuration or
accept the risk of not having system call filters installed. This commit
adds a bootstrap check that if system call filters are enabled, they
must successfully install.
Relates #21940
In #21828, serialization of the host string was added to preserve this information when
a TransportAddress gets serialized. However, there is still a case where this does not
work. In UnicastZenPings, DiscoveryNode instances are created for the ping hosts with the
minimum compatibility version, which is currently less than the version required to preserve
the host information. This means that when a node is received from a PingResponse, the
host information is no longer set correctly on the InetSocketAddress contained in the
DiscoveryNode.
This commit adds a workaround for this situation by allowing the host string to be passed
into the TransportAddress constructor that takes a StreamInput and using that as the host
for the InetAddress that is created during deserialization.