This change replaces the `fields` parameter with `stored_fields` when it makes sense.
This is dictated by the renaming we made in #18943 for the search API.
The following endpoints have been changed to use `stored_fields` instead of `fields`:
* get
* mget
* explain
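For illustration, assuming the 5.0 Java request builders mirror the REST rename (the builder method below is an assumption, not verified against the actual API):
```java
// Hypothetical sketch: fetching stored fields on a get after the rename.
// "client" is assumed to be an existing org.elasticsearch.client.Client instance.
GetResponse response = client.prepareGet("my-index", "my-type", "1")
        .setStoredFields("title", "tags")   // previously setFields(...)
        .get();
```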
The documentation and the REST API spec have been updated to reflect the changes for the following APIs:
* delete_by_query
* get
* mget
* explain
The `fields` parameter has been deprecated for the following APIs (it is replaced by _source filtering):
* update: the fields are extracted from the _source directly.
* bulk: the `fields` parameter is still accepted, but the fields are extracted from the `_source` directly, so non-stored fields are allowed.
Some APIs still have the `fields` parameter for various reasons:
* cat.fielddata: the `fields` parameter relates to the fielddata fields that should be printed.
* indices.clear_cache: used to indicate which fielddata fields should be cleared.
* indices.get_field_mapping: used to filter fields in the mapping.
* indices.stats: get stats on fields (stored or not stored).
* termvectors: fields are retrieved from the stored fields if possible and extracted from the _source otherwise.
* mtermvectors: same as termvectors.
* nodes.stats: the fields parameter is used to concatenate completion_fields and fielddata_fields so it's not related to stored_fields at all.
Fixes #20155
Today we add a prefix when logging within Elasticsearch. This prefix
contains the node name, and index and shard-level components if
appropriate.
Due to some implementation details with Log4j 2, this does not work for
integration tests; instead what we see is the node name for the last
node to start up. The implementation detail here is that in Log4j 2 there
is only one logger per (name, message factory) pair, and the key derived
from the message factory is the class name of the message factory. So,
when the last node starts up and starts setting prefixes on its message
factories, it will impact the loggers for the other nodes.
Additionally, the prefixes are lost when logging an exception. This is
due to another implementation detail in Log4j 2. Namely, since we log
exceptions using a parameterized message, Log4j 2 decides that this
means we do not want to use the message factory that we have
provided (the prefix message factory) and so logs the exception without
the prefix.
This commit fixes both of these issues.
Relates #20429
This commit cuts over geo_point fields to use Lucene's new point-based LatLonPoint type for indexes created in 5.0. Indexes created prior to 5.0 continue to use their respective encoding type. Below is a description of the changes made to support the new encoding type:
* New indexes use a new LatLonPointFieldMapper which provides a parse method for the new type
* The new LatLonPoint parse method removes support for lat_lon and geohash parameters
* Backcompat testing for deprecated lat_lon and geohash parameters is added to all unit and integration tests
* LatLonPointFieldMapper provides DocValues support (enabled by default) which uses Lucene's new LatLonDocValuesField type
* New LatLonPoint field data classes are added for aggregation support (wraps LatLonPoint's Numeric Doc Values)
* MultiFields use the geohash as the string value instead of the comma-delimited lat,lon string, making it easier to perform geo string queries on the geohash.
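For context, a rough sketch of what the new encoding boils down to at the Lucene level; this is illustrative only (not the mapper's actual code), using Lucene's public LatLonPoint APIs:
```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.LatLonDocValuesField;
import org.apache.lucene.document.LatLonPoint;
import org.apache.lucene.search.Query;

class GeoPointIndexingSketch {
    static void addPoint(Document doc, String field, double lat, double lon) {
        doc.add(new LatLonPoint(field, lat, lon));            // point-based (BKD tree) index
        doc.add(new LatLonDocValuesField(field, lat, lon));   // doc values for aggregations/sorting
    }

    static Query withinMeters(String field, double lat, double lon, double meters) {
        return LatLonPoint.newDistanceQuery(field, lat, lon, meters);
    }
}
```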
Removed Features:
* With the removal of geohash indexing, GeoHashCellQuery support is removed for all new indexes (still supported on existing indexes)
* LatLonPoint does not support a Distance Range query because it is super inefficient. Instead, the geo_distance_range query should be accomplished using either the geo_distance aggregation, sorting by descending distance on a geo_distance query, or a boolean must not of the excluded distance (which is what the distance_range query did anyway).
TODO:
* fix/finish yaml changes for plugin and rest integration tests
* update documentation
When generating random bogus documents, it could happen that they contain both the terms "the" and "ultimate", which would match the query "the ultimate" better than all the other non-bogus documents and cause testCrossFieldMode to fail. "the" is a term that's relatively likely to be randomly generated given its length; we can simply increase the minimum length of randomly generated terms to 5 so that there are no collisions, as "the" cannot be generated anymore (nor can "ultimate", as the length doesn't go up to 8).
Also made some assertions more accurate to check how many hits match a query rather than checking only that the first or second hits are there.
Closes #18873
This commit adds a -q/--quiet option to Elasticsearch so that it does not log anything to the console and closes the stdout & stderr streams. This is useful for SystemD to avoid duplicate logs in both journalctl and /var/log/elasticsearch/elasticsearch.log, while still allowing the JVM to print error messages to stdout/stderr if needed.
Closes #17220
This commit adds a health status parameter to the cat indices API for
filtering on indices that match the specified status (green|yellow|red).
Relates #20393
Previously when trying to listen on virtual interfaces during
bootstrap the application would stop working - the interface
couldn't be found by the NetworkUtils class.
The NetworkUtils class relies on the underlying JDK NetworkInterface
class which, when asked to look up an interface by name, only takes
physical interfaces into account and fails on virtual (sub-)interfaces,
returning null.
Note that when iterating over all interfaces, both physical and
virtual ones are taken into account.
This changeset asks for all known interfaces, iterates over them
and matches on the given name as part of the loop, allowing it
to catch both physical and virtual interfaces.
As a result, Elasticsearch can now also serve on virtual
interfaces.
A test case has been added which makes sure that all
iterable interfaces can be found by their respective name.
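A condensed sketch of the lookup strategy (illustrative, not the actual NetworkUtils code):
```java
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.Collections;

final class InterfaceLookup {
    // Instead of NetworkInterface.getByName(), iterate every known interface and match on
    // its name, which also catches virtual (sub-)interfaces.
    static NetworkInterface byName(String name) throws SocketException {
        for (NetworkInterface netIf : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            if (netIf.getName().equals(name)) {
                return netIf;
            }
        }
        return null; // not found
    }
}
```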
Note that this PR is a second iteration over the previously
merged but later reverted #19537 because it causes tests
to fail when interfaces are down. The test has been modified
to take this into account now.
Closes #17473
Closes #19568
Relates #19537
Match query throws parsing errors when an array of terms is provided; we should test that to make sure this behaviour doesn't change.
Relates to #15741
Once a primary is marked as relocated, we can not safely move it back to started (as we have no way of waiting on in-flight operations that are performed on the target primary). If the master cancels the relocation in that state, we fail the primary. Sadly, there is a race condition between the `updateRoutingEntry` method (which is called when the relocation is cancelled by the master) and the `relocated` method. That race condition can leave the shard marked as "relocated" while the routing entry does not reflect the target relocation. This in turn causes NPEs in TransportReplicationAction:
```
java.util.Objects requireNonNull Objects.java 203
org.elasticsearch.action.support.replication.TransportReplicationAction$ConcreteShardRequest <init> TransportReplicationAction.java 982
```
Sadly, once we end up in this state, we will never recover.
This commit fixes that race condition by making sure `updateRoutingEntry` acquires the mutex when checking for the relocated status. While at it, I also tightened up the code and added lots of assertions/hard checks.
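Roughly, the fix amounts to something like the following; this is an illustrative sketch, not the exact IndexShard code:
```java
// Sketch: the relocated-state check and the routing update now happen under the same mutex,
// so a cancelled relocation can no longer race with the relocated() method.
public void updateRoutingEntry(ShardRouting newRouting) {
    synchronized (mutex) {
        if (state == IndexShardState.RELOCATED && newRouting.relocating() == false) {
            // a relocated primary can never safely go back to started; fail the shard instead
            throw new IndexShardRelocatedException(shardId());
        }
        this.shardRouting = newRouting;
    }
}
```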
Adds the entire DiscoveryNode object to the trace log in AllocationDeciders.
The allocation decider logging at TRACE level can sometimes be helpful to determine why a shard is not getting allocated on specific nodes. Currently, we only log the node id for these messages. It will be helpful to also include the node name (esp. when dealing with a lot of nodes in the cluster).
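Something along these lines (the message and field access are illustrative):
```java
// Before: only the node id appears in the trace message.
logger.trace("Can not allocate [{}] on node [{}]", shardRouting, node.nodeId());
// After: the whole DiscoveryNode is logged, which includes the node name as well as its id.
logger.trace("Can not allocate [{}] on node [{}]", shardRouting, node.node());
```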
In 5.x we allowed this with a deprecation warning. This removes the code
added for that deprecation, requiring the cluster name to not be in the
data path.
Resolves #20391
This change removes the guice dependency handling for SearchService and
several related classes like SearchTransportController and SearchPhaseController.
The latter two now have package private constructors and dependencies like FetchPhase
are now created by calling their constructors explicitly. This also cleans up several users
of the DefaultSearchContext and centralizes its creation inside SearchService.
These cause early termination of tests, which means we don't clean up and close shards, but also don't cause a failure. This in turn makes TestRuleTemporaryFilesCleanup fail on Windows (because it does try to clean up, but the files are still referenced), producing errors like:
```
> C:\jenkins\workspace\es_core_master_windows-2012-r2\core\build\testrun\test\J3\temp\org.elasticsearch.index.shard.IndexShardTests_68B5E1103D78A58B-001\tempDir-006\indices\_na_\0\translog\translog-1.tlog: java.nio.file.AccessDeniedException: C:\jenkins\workspace\es_core_master_windows-2012-r2\core\build\testrun\test\J3\temp\org.elasticsearch.index.shard.IndexShardTests_68B5E1103D78A58B-001\tempDir-006\indices\_na_\0\translog\translog-1.tlog
```
Splits the PrimaryShardAllocator and ReplicaShardAllocator's decision
making for a shard from the implementation of that decision on the
routing table. This is a step toward making it easier to use the same
logic for the cluster allocation explain APIs.
Introduce a base class for unit tests that are based on real `IndexShard`s. The base class takes care of all the little details needed to create and recover shards.
This commit also moves `IndexShardTests` and `ESIndexLevelReplicationTestCase` to use the new base class. All tests in `IndexShardTests` that required a full node environment were moved to a new `IndexShardIT` suite.
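A hedged sketch of what a test built on the new base class could look like; the base class and helper names (IndexShardTestCase, newShard, recoverShardFromStore, indexDoc, assertDocCount, closeShards) are assumptions here, not verified signatures:
```java
public class MyShardTests extends IndexShardTestCase {
    public void testIndexIntoFreshShard() throws Exception {
        IndexShard shard = newShard(true);   // a primary shard backed by a temp directory
        recoverShardFromStore(shard);        // bring it to the started state
        indexDoc(shard, "test", "0");        // index a trivial document
        assertDocCount(shard, 1);
        closeShards(shard);                  // the base class takes care of the cleanup details
    }
}
```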
Before, when there was a new cluster state to publish,
zen discovery would first update the set of nodes to
ping based on the new cluster state, then publish the new
cluster state. This is problematic because if the cluster
state failed to publish, then the set of nodes to ping
should not have been updated.
This commit fixes the issue by updating the set of
nodes to ping for fault detection only *after* the new
cluster state has been published.
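In pseudo-Java, the reordering looks roughly like this (method names are illustrative, not the exact ZenDiscovery API):
```java
// Before the fix, the fault-detection node set was updated before publishing, so a failed
// publish left fault detection pointing at nodes from a cluster state that never took effect.
ClusterState newState = clusterChangedEvent.state();
publishClusterState(clusterChangedEvent);            // may throw if publishing fails
nodesFaultDetection.updateNodesAndPing(newState);    // only reached after a successful publish
```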
We should instead throw a clear exception pointing out that we cannot extract fields from _source. Note that this happens only when explicitly trying to extract fields from source. When source is disabled and no _source parameter is specified, no errors will be thrown and no source will be returned.
Closes #20408
Relates to #20093
The search request supports an ext section that is used to provide additional config needed by plugins. It is currently tied to sub fetch phases because that is the only section that may need additional config, but there is no reason for the two to be tightly coupled.
It is now possible to register a searchExtParser independently from a sub fetch phase. All a search ext parser does is parsing some ext section of a search request, whose parsed resulting object is stored in the search context for later retrieval.
The parser is now needed only for sub fetch phases, but it doesn't have to be strictly connected to them; it could potentially be used for something else as well.
The context was an object where the parsed info is stored. That is closer to what we call a builder since the search refactoring. No need for generics in FetchSubPhaseParser then. Also, the previous setHitsExecutionNeeded wasn't useful; it can be removed as well, given that once there is a parsed ext section, it becomes a builder that can be retrieved by the sub fetch phase. The sub fetch phase is responsible for doing nothing in case the builder is not set, meaning that the fetch sub phase is plugged in but the request didn't have the corresponding section.
SearchParseElement is renamed to FetchSubPhaseParser and moved to the search.fetch package. Its parse method doesn't get the SearchContext as argument anymore, only the XContentParser, and the return type is what gets parsed (the fetch sub phase context which we may as well rename later).
It is the parser that initializes the FetchSubPhaseContext then. SearchService retrieves the parser by name, calls parse against it and stores the result of parsing by name. No need for FetchSubPhase.ContextFactory anymore, which can be removed.
Given that doc value fields is our own fetch sub phase, it doesn't need to be implemented as if it were plugged in from the outside. It doesn't need its own fetch sub phase context; it can just be an instance member in SearchContext.
Parse elements are always empty for all of our search phases. They can be non-empty only for sub fetch phases, as those are pluggable, and the search parse element is left to be used only by plugins that plug in their own sub fetch phase. Only FetchPhase needs the parseElements method now.
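A hedged sketch of the consumer side of this contract (the plugin's own pieces, MyExtBuilder and the getSearchExt lookup, are illustrative, not the real API):
```java
@Override
public void hitExecute(SearchContext context, HitContext hitContext) {
    // Look up the builder that the FetchSubPhaseParser produced for this request, by name.
    MyExtBuilder builder = (MyExtBuilder) context.getSearchExt("my_plugin"); // hypothetical lookup
    if (builder == null) {
        return; // the phase is plugged in, but this request had no "my_plugin" ext section
    }
    // ... use the builder to enrich the hit ...
}
```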
Log4j has a bug where on shutdown it ignores that JMX might be disabled;
it proceeds to attempt to access JMX, leading to a security exception
that would not have occurred had it respected that JMX is disabled. This
commit intentionally introduces jar hell with the Server class to work
around this bug until a fix is released.
Relates #20389
Previously we would disable console logging in certain circumstances
(for example, if Elasticsearch is not in the foreground, or if
Elasticsearch is in the foreground but an exception was thrown during
bootstrap). This commit makes this handling work with Log4j 2. This will
prevent users from seeing double bootstrap check failure messages.
Relates #20387
Since the sub query of a function score query is checked in CustomQueryScorer#extractUnknownQuery, we try to extract the terms from the rewritten form of the sub query.
A MultiTermQuery rewrites to a query wrapped within a constant score query/weight, which returns an empty array when extractTerms is called.
The extraction of the inner terms of a constant score query/weight changed in Lucene somewhere between ES version 2.3 and 2.4 (https://issues.apache.org/jira/browse/LUCENE-6425) which is why this problem occurs on ES > 2.3.
This change moves the extraction of the sub query from CustomQueryScorer#extractUnknownQuery to CustomQueryScorer#extract in order to do the extraction of the terms on the original form of the sub query.
This fixes highlighting of sub queries that extend MultiTermQuery, since there is a special path for this kind of query in the QueryScorer (which extracts the terms to highlight).
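A hedged sketch of the idea behind the fix (simplified; the real CustomQueryScorer code differs in its details, and the extract signature here follows Lucene 6.x's WeightedSpanTermExtractor):
```java
protected void extract(Query query, float boost, Map<String, WeightedSpanTerm> terms) throws IOException {
    if (query instanceof FunctionScoreQuery) {
        // Recurse into the original (un-rewritten) sub query so that MultiTermQuery terms are
        // still visible to the highlighter, instead of being hidden behind a constant-score weight.
        extract(((FunctionScoreQuery) query).getSubQuery(), boost, terms);
    } else {
        super.extract(query, boost, terms);
    }
}
```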