Today we can easily join a cluster that holds an index we don't support, since
we currently allow rolling upgrades from 5.x to 6.x. Along the same lines, we don't
check whether we can support an index based on the nodes in the cluster when we
open, restore, or metadata-upgrade an index. This commit adds additional safety
that fails cluster state validation, open, restore, and/or upgrade if the cluster
contains an open index created with an incompatible index version.
Relates to #21670
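A minimal sketch of the kind of check this adds, assuming the `Version#minimumIndexCompatibilityVersion` helper; it is an illustration, not the actual validation code:
```
// Sketch (assumptions, not the actual validation code): reject any index whose
// creation version is older than what this node can still read.
import org.elasticsearch.Version;

final class IndexCompatibilityCheck {
    static void ensureIndexCompatibility(Version indexCreated, Version nodeVersion) {
        // indexCreated would come from the index metadata setting index.version.created
        if (indexCreated.before(nodeVersion.minimumIndexCompatibilityVersion())) {
            throw new IllegalStateException("index was created with version [" + indexCreated
                    + "] but the minimum compatible version is ["
                    + nodeVersion.minimumIndexCompatibilityVersion() + "]");
        }
    }
}
```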
Today, timeouts are global across all connections. This commit allows specifying
a connection timeout per node, so that, depending on the context, connections can
be established with different timeouts.
Relates to #19719
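A hedged usage sketch of what this enables; the builder and method names assume the new ConnectionProfile API rather than quoting it:
```
// Build a profile with its own connect timeout instead of relying on the
// global transport setting (API names are assumptions, not verified here).
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.transport.ConnectionProfile;
import org.elasticsearch.transport.TransportRequestOptions;

final class ShortTimeoutProfile {
    static ConnectionProfile build() {
        ConnectionProfile.Builder builder = new ConnectionProfile.Builder();
        builder.setConnectTimeout(TimeValue.timeValueSeconds(1));    // per-connection timeout
        builder.addConnections(1, TransportRequestOptions.Type.REG); // one lightweight channel
        return builder.build();
    }
}
```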
Currently we have these logs for integration tests only.
This adds the following log at the start:
```
logger.info("[{}]: before test", getTestName());
```
and this is logged at the end, but before any cleanup done in subclasses:
```
logger.info("[{}]: after test", getTestName());
```
We currently treat every node equally when we establish connections to it.
Yet, if a node is not master-eligible or can't hold any data, there is no point in creating
a dedicated connection for sending the cluster state or running remote recoveries, respectively.
Using STATE and RECOVERY connections for non-master and/or non-data nodes will now result in an IllegalStateException.
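A minimal sketch of the kind of guard this implies, assuming `DiscoveryNode#isMasterNode`/`#isDataNode` and the transport channel types; it is an illustration, not the actual transport code:
```
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.transport.TransportRequestOptions;

final class ChannelTypeGuard {
    static void ensureChannelSupported(DiscoveryNode node, TransportRequestOptions.Type type) {
        // a node that can never be master has no use for a dedicated STATE channel
        if (type == TransportRequestOptions.Type.STATE && node.isMasterNode() == false) {
            throw new IllegalStateException("can't select [STATE] channel for non-master node " + node);
        }
        // a node that holds no data never receives recoveries
        if (type == TransportRequestOptions.Type.RECOVERY && node.isDataNode() == false) {
            throw new IllegalStateException("can't select [RECOVERY] channel for non-data node " + node);
        }
    }
}
```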
For some fields we have a specialized TermQuery implementation that is specific to the field.
When these kinds of fields are used in a wildcard query or a span term query, the query fails with an exception because those queries don't recognize the specialized form.
The impacted fields are [_all] and [_type], and the impacted queries are [span_term] and [wildcard].
This change handles these forms and correctly extracts the term inside them for further use.
Fixes #21882
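A simplified illustration of the extraction step, shown with a plain Lucene TermQuery; the real fields use specialized query classes, but the idea is the same: recognize the form and pull out the wrapped term for reuse.
```
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

final class TermExtraction {
    static Term extractTerm(Query query) {
        // recognize the term-like form and hand back the inner term so that
        // span_term and wildcard can reuse it instead of failing on the query type
        if (query instanceof TermQuery) {
            return ((TermQuery) query).getTerm();
        }
        throw new IllegalArgumentException("unable to extract a term from query [" + query + "]");
    }
}
```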
Today, when handling discovery pings, we unconditionally
reply. This commit modifies the response handler to not reply
when the cluster names do not match.
This addresses a race condition identified after reducing the timeout in
UnicastZenPingTests#testSimplePings. In particular, we send pings in the
following way:
- if not connected to the node, connect to the node and after
successful handshake, send a ping
- if connected to the node, send a ping
When the ping timeout is set low, a subsequent batch of pings can race
against a connect/disconnect cycle from a prior batch of pings. In
particular, consider the following scenario:
- node A from cluster X
- node B from cluster Y
- pings are initiated from node A with node B in the hosts list
- node A will try to connect and handshake with B
- the connection will succeed, and the handshake will eventually fail due to mismatched cluster names
- on a short timeout, a second batch of pings will fire, and on this
batch node A will see that it is still connected to node B; thus, it
will immediately fire a ping to node B and node B will dutifully
respond
Relates #21894
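A minimal sketch of the behavioral change, not the actual handler code: the response handler compares cluster names and simply stays silent on a mismatch, so a lingering connection from a foreign cluster can no longer elicit a reply.
```
import org.elasticsearch.cluster.ClusterName;

final class PingClusterNameCheck {
    // reply only when the pinging node belongs to the same cluster
    static boolean shouldReply(ClusterName local, ClusterName remote) {
        return local.equals(remote);
    }
}
```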
It used to be a hybrid store between `niofs` and `mmapfs`; we removed it when
we switched to `fs` by default (which is `mmapfs` on 64-bit systems).
Currently, the `terms` query is just syntactic sugar for a `bool` query when
used in a query context. This change proposes to always generate the same query
in query and filter contexts, which is less confusing.
When users send a large `terms` query to Elasticsearch, every value is stored in
an object. This change does not reduce the number of created objects, but makes
sure these objects die young by optimizing the list storage in the case that all values
are either non-null Long instances or BytesRef instances, which seems
to help the JVM significantly.
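A minimal sketch of the storage idea under the stated assumption that all values are non-null Longs; the class and method names here are made up for illustration and only show why the boxed objects can die young.
```
import java.util.List;

final class TermsListPacking {
    // returns a primitive long[] if every value is a non-null Long, otherwise the original list
    static Object pack(List<?> values) {
        long[] packed = new long[values.size()];
        for (int i = 0; i < values.size(); i++) {
            Object value = values.get(i);
            if (value instanceof Long == false) {
                return values; // mixed or null content: keep the boxed list as-is
            }
            packed[i] = (Long) value;
        }
        return packed; // the boxed Longs are now short-lived garbage
    }
}
```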
For the record, I also had to remove the geo-hash cell and geo-distance range
queries to make the code compile. These queries already throw an exception in
all cases with 5.x indices, so removing them does not cause any additional harm.
I also had to rename all 2.x bwc indices from `index-${version}` to
`unsupported-${version}` to make `OldIndexBackwardCompatibilityIT`
happy.
These tests use ping timeouts on the order of seconds, but this is
unnecessary: since all the sockets are within the same JVM, pinging really
should not take that long.
Relates #21874
In some cases, such as the creation of DiscoveryNode instances for unicast ping requests, the
host information was not being populated properly and instead the address string was being used.
Additionally, when serializing a DiscoveryNode, and in turn a transport address, the host was not
being set on the InetAddress upon deserialization, so even if the address was created
from a hostname, the deserialized instance had no knowledge of the hostname that
was originally used.
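A hedged illustration of the deserialization side of the fix, using only the standard JDK API: `InetAddress.getByAddress(host, addr)` attaches the hostname without triggering a reverse lookup.
```
import java.net.InetAddress;
import java.net.UnknownHostException;

final class AddressDeserialization {
    static InetAddress readAddress(String host, byte[] addressBytes) throws UnknownHostException {
        if (host != null && host.isEmpty() == false) {
            // keep the hostname the address was originally created from
            return InetAddress.getByAddress(host, addressBytes);
        }
        return InetAddress.getByAddress(addressBytes);
    }
}
```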
SearchTemplateRequest to implement CompositeIndicesRequest
Given that SearchTemplateRequest effectively delegates to search when a search is being executed, it should implement the CompositeIndicesRequest interface. The subrequests method should return a single search request. When a search is not going to be executed, because we are in simulate mode, there are no inner requests, and there are no corresponding indices to that request either.
Closes #21747
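A hedged illustration of the contract described above; `SearchTemplateLikeRequest` is a hypothetical stand-in, not the real class.
```
import java.util.Collections;
import java.util.List;

final class SearchTemplateLikeRequest {
    private final Object searchRequest; // stands in for the wrapped SearchRequest
    private final boolean simulate;

    SearchTemplateLikeRequest(Object searchRequest, boolean simulate) {
        this.searchRequest = searchRequest;
        this.simulate = simulate;
    }

    // a single sub-request when a search will actually run, none in simulate mode
    List<?> subRequests() {
        return simulate ? Collections.emptyList() : Collections.singletonList(searchRequest);
    }
}
```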
These query names were all deprecated in 5.0.0:
- in is removed in favour of terms
- geo_bbox is removed in favour of geo_bounding_box
- mlt is removed in favour of more_like_this
- fuzzy_match and match_fuzzy are removed in favour of match
Set the Lucene version to 6.4.0-snapshot-ec38570 and update all the sha1s/licenses
Fix an invalid combination after the upgrade in the query_string query: split_on_whitespace=false is disallowed if auto_generate_phrase_queries=true
Adapt the expectations of some tests to the new format of the Lucene explain output
Lucene 6.2 added index and query support for numeric ranges. This commit adds a new RangeFieldMapper for indexing numeric (int, long, float, double) and date ranges and creating appropriate range and term queries. The design is similar to NumericFieldMapper in that it uses a RangeType enumerator for implementing the logic specific to each type. The following range types are supported by this field mapper: int_range, float_range, long_range, double_range, date_range.
Lucene does not provide a DocValue field specific to RangeField types so the RangeFieldMapper implements a CustomRangeDocValuesField for handling doc value support.
When executing a Range query over a Range field, the RangeQueryBuilder has been enhanced to accept a new relation parameter for defining the type of query as one of: WITHIN, CONTAINS, INTERSECTS. This provides support for finding all ranges that are related to a specific range in a desired way. As with other spatial queries, DISJOINT can be achieved as a MUST_NOT of an INTERSECTS query.
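A hedged usage sketch of the new relation parameter via the Java query builders; the field name is made up for illustration.
```
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.index.query.RangeQueryBuilder;

final class RangeRelationExample {
    // find documents whose indexed range lies entirely within [10, 20]
    static RangeQueryBuilder withinTenToTwenty() {
        return QueryBuilders.rangeQuery("temperature_range") // hypothetical range field
                .gte(10)
                .lte(20)
                .relation("within");
    }
}
```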
The Transport#connectToNodeLight concept is confusing, not very flexible,
and not really testable at the unit-test level. This commit cleans up the code used
to connect to nodes and simplifies transport implementations to share more code.
This also allows connecting to nodes with custom profiles if needed; for instance,
future improvements can connect to/from non-data nodes without
dedicated bulk and recovery connections.
This commit improves the decision explanation messages,
particularly for NO decisions, in the various AllocationDecider
implementations by including in the explanation message
the setting(s) that led to the decision.
This commit also returns a THROTTLE decision instead of a NO
decision when the concurrent rebalances limit has been reached
in ConcurrentRebalanceAllocationDecider, because it more accurately
reflects a temporary throttling that will turn into a YES decision
once the number of concurrent rebalances lessens, as opposed to a
more permanent NO decision (e.g. due to filtering).
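A sketch of the decision shape described above, assuming the `Decision.single` helper; it is not the actual ConcurrentRebalanceAllocationDecider code, and the wording of the messages is illustrative.
```
import org.elasticsearch.cluster.routing.allocation.decider.Decision;

final class ConcurrentRebalanceSketch {
    static Decision decide(int relocatingShards, int clusterConcurrentRebalance) {
        if (relocatingShards >= clusterConcurrentRebalance) {
            // THROTTLE, not NO: the limit is temporary, and the message names the setting
            return Decision.single(Decision.Type.THROTTLE, "cluster_concurrent_rebalance",
                    "reached the limit of concurrently rebalancing shards [%d], cluster setting "
                            + "[cluster.routing.allocation.cluster_concurrent_rebalance=%d]",
                    relocatingShards, clusterConcurrentRebalance);
        }
        return Decision.single(Decision.Type.YES, "cluster_concurrent_rebalance",
                "below threshold [%d] for concurrent rebalances, current rebalance shard count [%d]",
                clusterConcurrentRebalance, relocatingShards);
    }
}
```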
When Netty listens on a socket, it specifies the established connection
backlog for the socket. On Linux, Netty tries to read the system-wide
configuration for this from /proc/sys/net/core/somaxconn and falls back
to a default value when it cannot read this value. This commit grants
Netty permission to read this file so that it can honor the system-wide
configuration for the connection backlog for sockets that it is
listening on. This also removes an obnoxious stack trace that appears
when Netty logging is set to debug.
Relates #21840
In the past we ran yaml tests against an internal cluster, which would get restarted after each test failure, hence the client objects eventually needed to be refreshed before each test. That is why we had the initClient method to re-initialize the YamlTestClient in the execution context. We ended up, though, re-initializing the client unconditionally, which is not needed.
Also, ESRestTestCase recreates the RestClient against the external cluster before each test, which is not needed given that nothing changes in the external cluster.
This commit removes the initClient method from the yaml tests execution context. The YamlTestClient can be eagerly created before the first yaml test runs and then re-used in subsequent tests. Also, API calls to check for node versions etc. are moved out of YamlTestClient into ESClientYamlSuiteTestCase. The RestClient is now initialized in ESRestTestCase before the first test runs, and kept around afterwards as a static member.
Basically, each subclass of ESRestTestCase will have its own RestClient instance, but the client will be shared across the different tests within the same class. The yaml test suite is just a special suite, composed of 600+ tests that are loaded from files, which will all share the same client instance.
This change should speed tests up as well, as we don't recreate the RestClient before each single test, nor call _cat/nodes before each single test.
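A simplified illustration of the reuse pattern (not the actual ESRestTestCase code): the low-level RestClient is built once per test class and shared across its tests.
```
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public abstract class SharedClientTestCase {
    protected static RestClient client;

    @BeforeClass
    public static void initClientOnce() {
        // created before the first test in the class, then reused by every test in it
        client = RestClient.builder(new HttpHost("localhost", 9200)).build();
    }

    @AfterClass
    public static void closeClient() throws Exception {
        client.close();
        client = null;
    }
}
```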
Integrate the patch from LUCENE-6664 into elasticsearch and
add support for handling a graph token stream in match/multi-match
queries.
This fixes longstanding bugs with multi-token synonyms returning
incorrect results with proximity queries.
This is a cleanup of the fix pushed in https://github.com/elastic/elasticsearch/pull/20400.
FiltersFunctionScoreQuery sub query should be extracted in CustomQueryScorer.extract (and not in CustomQueryScorer.extractUnknownQuery).
This does not fix any bug in this branch (it's just a cleanup), but the intent is first to clean up and then to backport it to 2.x, where there is a real bug.
The bug is in 2.x only because the backport of https://github.com/elastic/elasticsearch/pull/20400 in 2.x mistakenly renamed the FiltersFunctionScoreQuery to FunctionScoreQuery.
This leads to incorrect highlighting on FiltersFunctionScoreQuery in 2.x.
When a file script fails to compile, rather than logging the
exception that caused the failure, this logs the xcontent of
that exception. This is both shorter and contains the script stack,
which is useful for figuring out why the compilation failed.
The entire stacktrace is still logged at debug level, just in case
you need it.
Relates to #21733
In making changes for the 5.0 version of snapshots, a bug was
introduced: if an index-N file could not be found for an
individual shard, the fallback was to iterate over all snap-*.dat
files in the shard folder to determine which snapshots contain that
shard's data. In 5.0, however, reading the snap-*.dat files as a fallback
incorrectly passed in the blob name for the snap-*.dat file,
thereby failing to load all index files for a given snapshot
when the index-N file is missing. This condition should be rare,
as there is no reason an index-N file should be absent (unless
it was deleted or there was corruption reading the file), but
nevertheless, this situation can be encountered, and this commit
fixes the bug by reading the correct snap-*.dat blob name in the
shard data folder.