We will eagerly connect to all configured nodes; the liveness API call is not needed since we already do a light connect and get back the updated discovery node.
Group, List and Affix settings generate a bogus diff that turns the actual diff into a string containing a JSON structure, for instance:
```
"action" : {
"search" : {
"remote" : {
"" : "{\"my_remote_cluster\":\"[::1]:60378\"}"
}
}
}
```
which makes reading the setting impossible. This happens, for instance, when a group or affix setting is rendered via `_cluster/settings?include_defaults=true`. This change fixes the issue, as well as several minor issues with affix settings that were not accepted as valid settings today.
Today, if a comma-separated list is passed to `action.auto_create_index`, leading and trailing whitespace is not trimmed. Since the values are index expressions, whitespace should be removed for convenience.
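A tiny illustration of the intended trimming; the value and class name here are made up and this is not the actual parsing code:
```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class TrimAutoCreateIndexValue {
    public static void main(String[] args) {
        // A value as it might be passed to action.auto_create_index, with stray whitespace.
        String value = "logs-*, -restricted-* , twitter";
        List<String> expressions = Arrays.stream(value.split(","))
                .map(String::trim)                  // trim each index expression
                .collect(Collectors.toList());
        System.out.println(expressions);            // [logs-*, -restricted-*, twitter]
    }
}
```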
Closes #21449
If no remote clusters are registered, we shouldn't even try resolving remote clusters as part of index names. Just go ahead and treat the index as a local one, which may or may not exist. In fact, the character that we use as a separator may be part of an alias name, or part of a date math expression. We will go ahead with the remote search only if the prefix before the first occurrence of the separator is an actual registered cluster; otherwise we will treat the index as a local one.
Previously we would go remote any time we found the separator in an index name, which caused false positives with aliases and date math expressions. It is also safer not to go remote unless there is some remote cluster registered. We'd also throw an exception whenever an unknown remote cluster was referred to, but this could again conflict with date math expressions or aliases: say we have some remote cluster configured and we are using the separator in a date math expression; we only want to go remote when the prefix matches a configured cluster, and stay local otherwise, as the index name may still be valid locally.
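A minimal sketch of that decision, using illustrative names and the current `|` separator (not the actual Elasticsearch code):
```java
import java.util.Set;

class RemoteIndexResolution {
    private static final char SEPARATOR = '|'; // current separator, may change

    /** Returns the remote cluster alias, or null if the expression should be treated as local. */
    static String resolveRemoteCluster(String indexExpression, Set<String> registeredClusters) {
        if (registeredClusters.isEmpty()) {
            return null; // no remote clusters registered -> always stay local
        }
        int separatorIndex = indexExpression.indexOf(SEPARATOR);
        if (separatorIndex < 0) {
            return null; // no separator -> local index
        }
        String prefix = indexExpression.substring(0, separatorIndex);
        // The separator may also appear in an alias name or a date math expression,
        // so only go remote when the prefix is an actually registered cluster.
        return registeredClusters.contains(prefix) ? prefix : null;
    }
}
```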
This commit adds `qa/multi-cluster-search`, which currently runs a simple search across two clusters. This commit also adds support for IPv6 addresses and fixes an issue where all shards of the local cluster were searched when only a remote index was given.
The search API looks at the indices that the request relates to; if they contain a certain separator ("|" at the moment, to be better defined in the future), the index name is split into two parts, where the first portion is the name of a remote cluster and the second part is the index expression. Remote clusters are defined as dynamic cluster settings. There are some TODOs and open questions, but the main functionality works.
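A hedged sketch of registering such a remote cluster through the Java client; the setting key (`action.search.remote.<alias>`) is inferred from the settings output shown earlier in this document, and the seed address is a placeholder:
```java
import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.Settings;

class RegisterRemoteCluster {
    static void register(Client client) {
        Settings transientSettings = Settings.builder()
                .put("action.search.remote.my_remote_cluster", "127.0.0.1:9300") // assumed key
                .build();
        client.admin().cluster().prepareUpdateSettings()
                .setTransientSettings(transientSettings)
                .get();
        // A request against "my_remote_cluster|logs-*" would then be routed to that cluster.
    }
}
```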
* Make ClusterSearchShardsGroup return ShardId rather than the int shard id
This allows more info to be retrieved, like the index uuid, which is exposed through the ShardId object but was not available before.
* Make ClusterSearchShardsResponse empty constructor public
This allows such responses to be received when sending ClusterSearchShardsRequests directly through TransportService (rather than using ClusterSearchShardsAction via Client); otherwise an empty response cannot be created unless the class that does it lives in the org.elasticsearch.action.admin.cluster.shards package.
* Adjust visibility of ClusterSearchShards members
The index uuid is unique across multiple clusters, while the index name is not. Using the index uuid to look up filters in the alias filters map is better and will be needed for multi cluster search.
If a bug occurs in painless compilation (not from a user, but from the
painless infrastructure), a VerifyError may be thrown when compiling the
broken generated class. This commit wraps VerifyErrors in
ScriptException so that useful information is returned to the user,
which can be passed on to the ES team for analysis.
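A minimal sketch of the wrapping idea (not the actual painless Compiler code); the helper below is hypothetical and the ScriptException constructor shape is assumed from the 5.x class:
```java
import java.util.Collections;
import org.elasticsearch.script.ScriptException;

class GeneratedClassLoading {
    /**
     * Hypothetical helper: load a generated class and rethrow bytecode verification
     * failures as ScriptException so the user (and the ES team) get the script source,
     * language and cause instead of a bare VerifyError.
     */
    static Class<?> loadGenerated(ClassLoader loader, String className, String source) {
        try {
            return Class.forName(className, true, loader);
        } catch (ClassNotFoundException | VerifyError e) {
            throw new ScriptException("compilation of generated class failed", e,
                    Collections.emptyList(), source, "painless");
        }
    }
}
```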
This bug would cause a VerifyError when scripts using the === operator
were comparing a def type against a primitive type since the primitive
type wasn't being appropriately boxed.
This commit makes sure that there is only one instance of the two services rather than one per transport action that uses it.
Also, we take their initialization out of guice's hands by binding them to specific instances. Otherwise those two objects would get created within a constructor that is called by guice, which can cause problems, for instance when such a constructor throws an exception: guice tries all over again to re-initialize the objects and fills up the logs with stacktraces.
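A minimal sketch of the pattern with placeholder types (`FooService` stands in for the actual services), showing the plain guice `toInstance` binding:
```java
import com.google.inject.AbstractModule;
import com.google.inject.Guice;

class FooService {
    FooService() {
        // may throw; with toInstance binding this constructor runs exactly once, outside guice
    }
}

class FooModule extends AbstractModule {
    private final FooService fooService;

    FooModule(FooService fooService) {
        this.fooService = fooService;
    }

    @Override
    protected void configure() {
        // guice hands out the existing singleton instead of constructing new instances,
        // so a failing constructor is not retried over and over with stacktraces in the logs
        bind(FooService.class).toInstance(fooService);
    }
}

class Bootstrap {
    public static void main(String[] args) {
        FooService service = new FooService();      // built once, outside guice
        Guice.createInjector(new FooModule(service));
    }
}
```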
* replace ShardRouting argument in AbstractSearchAsyncAction#onFirstPhaseResult with more contained String nodeId
There is no need to pass in ShardRouting if the only info read from it is the current node id; the shard id can be read directly from the ShardIterator that's already provided as an argument.
* avoid creating a new ShardId when creating a SearchShardTarget in SnapshotsService
Today when handling unreleased versions for backwards compatibility support, we scatter version constants across the code base and add some asserts to support removing these constants when the version in question is actually released. This commit improves this situation, enabling us to just add a single unreleased version constant that can be renamed when the version is actually released. This should make maintenance of these versions simpler.
Relates #21760
NOTE: The result of `?.` and `?:` can't be assigned to primitives. So
`int[] someArray = null; int l = someArray?.length` and
`int s = params.size ?: 100` don't work. Do
`def someArray = null; def l = someArray?.length` and
`def s = params.size ?: 100` instead.
Relates to #21748
ShardSearchRequest was previously taking the whole ShardRouting as a constructor argument while it only needs the ShardId; changed it to carry over only the needed bits.
* Transport client: Fix remove address to actually work
The removeTransportAddress method of TransportClient removes the address
from the list of nodes that the client pings to sniff for nodes.
However, it does not remove it from the list of existing connected
nodes. This means removing a node is not possible, as long as that node
is still up.
This change removes the node from the connected nodes list before triggering sampling (i.e. sniffing). While the fix is simple, testing it was not, because there were no existing tests for sniffing. This change also modifies the mocks used by transport client unit tests in order to allow mocking sniffing.
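For reference, a hedged usage sketch against the 5.x transport client API (host names are placeholders): after this change, removing an address also drops the node from the connected set, so requests stop going to it even while it is still up.
```java
import java.net.InetAddress;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.transport.client.PreBuiltTransportClient;

public class RemoveAddressExample {
    public static void main(String[] args) throws Exception {
        InetSocketTransportAddress node1 =
                new InetSocketTransportAddress(InetAddress.getByName("es-node-1"), 9300);
        InetSocketTransportAddress node2 =
                new InetSocketTransportAddress(InetAddress.getByName("es-node-2"), 9300);

        try (PreBuiltTransportClient client = new PreBuiltTransportClient(Settings.EMPTY)) {
            client.addTransportAddress(node1);
            client.addTransportAddress(node2);

            // Previously node1 could keep serving requests as long as it stayed up;
            // now it is removed from the connected nodes before the next sampling round.
            client.removeTransportAddress(node1);
        }
    }
}
```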
* Scripting: Remove groovy scripting language
Groovy was deprecated in 5.0. This change removes it, along with the
legacy default language infrastructure in scripting.
The `error_trace` parameter turns on the `stack_trace` field
in errors which returns stack traces.
Removes documentation for `camelCase` because it hasn't worked in a while.
Documents the internal parameters used to render stack traces as
internal only.
Closes #21708
This commit refactors the handling of bind permissions, which is in need
of a little cleanup. For example, in its current state, the code for
handling permissions for transport profiles is split across two
methods. This commit refactors this code, hopefully making it easier to work with in future changes. This change is mostly mechanical; no functionality is changed.
Relates #21742
The test UnicastZenPing#testResolveTimeout chooses a random resolve
timeout between 1ms and 100ms. Close to the lower bound, this is far too
short and the test races against the concurrent resolves executing
before the timeout elapses. This commit increases the timeout to
something that is far less likely to race, yet will not slow the test
down since we are not doing resolves against a real DNS service anyway.
Note that we still want a short resolve timeout since we are testing
whether or not timeouts really work here (by latching one of the
resolves to respond slowly).
Add indices and filter information to search shards api output
The search shards api returns info about which shards are going to be hit by executing a search with provided parameters: indices, routing, preference. Indices can also be aliases, which can also hold filters. The output includes an array of shards and a summary of all the nodes the shards are allocated on. This commit adds a new indices section to the search shards output that includes one entry per index, where each index can be associated with an optional filter in case the index was hit through a filtered alias.
This is relevant since we have moved parsing of alias filters to the coordinating node.
Relates to #20916
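A hedged sketch of reading the new section through the Java client; the accessor for the per-index filters (`getIndicesAndFilters`) is an assumption, and the alias name is a placeholder:
```java
import org.elasticsearch.action.admin.cluster.shards.ClusterSearchShardsGroup;
import org.elasticsearch.action.admin.cluster.shards.ClusterSearchShardsResponse;
import org.elasticsearch.client.Client;

class SearchShardsExample {
    static void printShardsAndFilters(Client client) {
        ClusterSearchShardsResponse response =
                client.admin().cluster().prepareSearchShards("filtered_alias").get();

        for (ClusterSearchShardsGroup group : response.getGroups()) {
            System.out.println("shard: " + group.getShardId()); // full ShardId, including the index uuid
        }
        // New per-index section: index name -> alias filter it was hit through, if any
        // (assumed accessor name, see the lead-in above).
        response.getIndicesAndFilters().forEach((index, aliasFilter) ->
                System.out.println(index + " -> " + aliasFilter));
    }
}
```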