This commit adds a simple test that JvmInfo is correctly able to
determine whether or not G1 GC is running. As the JvmInfo G1 GC logic
only applies to HotSpot, the test is likewise restricted to HotSpot. The
test determines whether or not G1 GC is running by inspecting the test
JVM argument line.
The change adds a new option to the `nested`, `has_parent`, `has_child` and `parent_id` queries: `ignore_unmapped`. If this option is set to `false`, the `toQuery` method on the QueryBuilder will throw an exception if the type/path specified in the query is unmapped. If the option is set to `true`, the `toQuery` method will instead return a MatchNoDocsQuery. The default value is `false`, so the queries behave as they do today (throwing an exception on unmapped paths/types).
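As a rough illustration, a minimal sketch using the Java API (index and field names are made up, and the exact builder signatures may differ between versions):

```java
import org.apache.lucene.search.join.ScoreMode;
import org.elasticsearch.index.query.NestedQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

public class IgnoreUnmappedExample {
    public static NestedQueryBuilder commentsByAlice() {
        // With ignoreUnmapped(true), toQuery() returns a MatchNoDocsQuery on
        // indices where the "comments" path is unmapped instead of throwing.
        return QueryBuilders.nestedQuery("comments",
                        QueryBuilders.termQuery("comments.author", "alice"),
                        ScoreMode.Avg)
                .ignoreUnmapped(true);
    }
}
```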
With the previous breaker limit, spurious failures could occur at the
transport level in this test. With the new limit we ensure that the
breaker trips because of a too-large HTTP request instead.
Previously the sigma variable in the `extended_stats_bucket` pipeline aggregation was not being serialised in `ExtendedStatsBucketPipelineAggregator`. This PR fixes that.
It also corrects the initial value of sumOfSquares to be 0.
Closes #17701
#14259 added a check to honor rebalancing policies (i.e., rebalance only on green state) when moving shards due to changes in allocation filtering rules. The rebalancing policy is there to make sure that we don't try to even out the number of shards per node when we are still missing shards. However, it should not interfere with explicit user commands (allocation filtering) or things like the disk threshold wanting to move shards because of a node hitting the high water mark.
#14259 was done to address #14057, where people reported that using allocation filtering caused many shards to be moved at once. This is, however, a non-issue: with 1.7 (where the issue was reported) and 2.x, we protect recovery source nodes by limiting the number of concurrent data streams they can open (i.e., we can have many recoveries, but they will be throttled). In 5.0 we came up with a simpler and more understandable approach where we have a hard limit on the number of outgoing recoveries per node (on top of the incoming recoveries limit we already had).
This commit removes the last remaining uses of JAVA_OPTS. Now searching
the codebase for the regex '(?<!ES_)JAVA_OPTS' only shows the usages that
warn of its removal and the note about it in the migration docs.
* Tokens in the same position are grouped into a SynonymQuery.
* The default operator is applied on tokens in different positions.
* The wildcard is applied to the terms in the last position only.
Fixes #2183
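A minimal Lucene-level sketch of the first bullet above (field and terms are hypothetical): tokens the analyzer emits at the same position are scored together as one pseudo-term:

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.SynonymQuery;

public class SamePositionExample {
    public static SynonymQuery samePositionTokens() {
        // "fast" and "quick" were emitted at the same token position (e.g. by a
        // synonym filter), so they are grouped into a single SynonymQuery.
        return new SynonymQuery(new Term("body", "fast"), new Term("body", "quick"));
    }
}
```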
This commit modifies the bootstrap checks to output all failing checks
instead of early-failing on the first check and then possibly failing
again after the user resolves the first failing check.
Closes #17474
This commit moves the execution of the bootstrap checks to after network
services are started. This gives us the flexibility to not merely check
if any of the network settings are set, but instead be smarter about it
and check if the network settings are set in a way that means that the
node will be communicating over an external network (either directly, or
via a proxy). As a bonus, executing the bootstrap checks in this way
enables us to have the node name in the logs!
Closes #17570
If ts=0 is passed, cat health disables the epoch and timestamp columns.
Make timestamp and epoch constant strings.
Move timestamp and epoch handling to Table.
Add a REST API test and a unit test.
Closes #10109
With this commit we limit the size of all in-flight requests at the
HTTP level. The size is guarded by the same circuit breaker that
is also used at the transport level. Similarly, the size that is used
is the HTTP content length.
Relates #16011
With this commit we limit the size of all in-flight requests at the
transport level. The size is guarded by a circuit breaker and is
based on the content size of each request.
By default we use 100% of the available heap, meaning that the parent
circuit breaker will limit the maximum available size. This value
can be changed by adjusting the setting
`network.breaker.inflight_requests.limit`.
Relates #16011
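For reference, a hedged sketch of lowering the new limit below its default (the 75% value is just an example):

```java
import org.elasticsearch.common.settings.Settings;

public class InflightBreakerSettings {
    public static Settings lowerInflightLimit() {
        // The default is 100% of the heap, so the parent breaker is effectively
        // the bound; here we cap in-flight requests at 75% of the heap instead.
        return Settings.builder()
                .put("network.breaker.inflight_requests.limit", "75%")
                .build();
    }
}
```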
The cluster reroute API had a copy of NamedWriteableRegistry's behavior
inside it in the form of AllocationCommands#registerFactory and
AllocationCommands#lookupFactorySafe. There isn't a reason to duplicate
that effort. So this replaces all of AllocationCommand#Factory with
query-like registration in NetworkModule. Why NetworkModule? Because
the transport client needs it.
Makes Writeable not depend on StreamableReader. Keeps the default readFrom
implementation for backwards compatibility during the PROTOTYPE removal
but that'll go when those are gone.
Makes Diffable not extend StreamableReader. Instead Diffable has a readFrom
method. The PROTOTYPE removal will not get to cluster state for a long time
so that method will stay.
Now only a few things implement StreamableReader. They will be addressed
individually and then we'll remove StreamableReader.
When an index is recovered from disk, its metadata is imported first and the master reaches out to the nodes looking for shards of that index. Sometimes those requests reach other nodes before the cluster state is processed by them. At the moment, that situation disables the checking of the store, which requires the metadata (indices with a custom path need to know where the data is). When corruption hits, this means we may assign a shard to a node with a corrupted store, which will be caught later on but causes confusion. Instead we can try loading the metadata from disk in those cases.
Relates to #17630
When it comes to query parsing, either a field is tokenized, in which case it
goes through analysis with its search_analyzer, or it is not tokenized and the
raw string should be passed to termQuery(). Since numeric fields are not
tokenized and also declare a search analyzer, values would currently go through
analysis twice...
We have a couple places in the code base that assume that search is always done
on the inverted index. However with the new points API in Lucene 6, this is not
true anymore. This commit makes MappedFieldType.indexedValueForSearch protected
and fixes call sites to keep working for field types that use the inverted
index and either work differently or throw an exception otherwise. For instance,
it will still be possible to run cross_fields multi match queries on numeric
fields, but the score contributions will not be blended as well as before, and
significant terms aggregations on long terms will not be possible anymore since
points do not record document frequencies.
This commit removes `MappedFieldType.value` and simplifies
`MappedFieldType.valueForSearch`. `valueForSearch` was used to post-process
values that come from stored fields (e.g. to convert a long back to a string
representation of a date in the case of a date field) and also values that
are extracted from the source, but only in the case of GET calls: it would
not be called when performing source filtering on search requests.
`valueForSearch` is now only called for stored fields, since values that are
extracted from the source should already be formatted as expected.
In both cases, what Elasticsearch is really interested in is whether the field
is an analyzed string field, so it can just check `tokenized()` instead.
* upgrades numerics to new Point format
* updates geo api changes
* adds GeoPointDistanceRangeQuery as XGeoPointDistanceRangeQuery
* cuts over to ES GeoHashUtils
TL;DR This commit should not have any impact on terms aggs, it will just make
supporting ipv6 easier.
Currently only the numeric terms aggs propagate the DocValueFormat instance, since
we also use numerics to represent dates or ip addresses. Since string terms aggs
are only used for text/keyword/string fields, they do not use the format and just
call utf8ToString(). However, when we support ipv6, ip addresses will also be
encoded in sorted doc values (just like strings) so we will need to use the
DocValueFormat to format the keys.
CBOR is natively supported in Elasticsearch and allows for byte arrays.
This means that by using CBOR the user can prevent base64 conversions
for the data being sent back and forth.
This PR adds support for extracting data from a byte array in addition to
a string. This also required adding a ByteArrayValueSource class.
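A small sketch of what sending binary content as CBOR can look like from the Java side (field name and payload are made up); the CBOR builder writes the bytes natively rather than base64-encoding them:

```java
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;

public class CborPayloadExample {
    public static XContentBuilder binaryDocument(byte[] payload) throws java.io.IOException {
        // With the CBOR generator the byte array is written as a native binary
        // value; a JSON generator would have to base64-encode it instead.
        return XContentFactory.cborBuilder()
                .startObject()
                .field("data", payload)
                .endObject();
    }
}
```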
In #17133 we introduced request size limit handling and need a custom
channel implementation. In order to ensure we delegate all methods
it is better to have this channel implement an interface instead of
an abstract base class (so changes on the interface turn into
compile errors).
Relates #17133
Sometimes we get a test failure caused by search contexts left open.
The tests include a stack trace of the call that opened the context
but nothing else about the context. This adds more information about
the context that has been left open like what query it was running,
what shard it targeted, and whether or not it was a scroll.
Relates to #17582
When you implement an ingest factory which implements `Closeable`:
```java
public static final class Factory extends AbstractProcessorFactory<MyProcessor> implements Closeable {
@Override
public void close() throws IOException {
logger.debug("closing my processor factory");
}
}
```
The `close()` method is never called, which could lead to leaked threads when we close a node.
The `ProcessorsRegistry#close()` method exists though and seems to do the right job:
```java
@Override
public void close() throws IOException {
List<Closeable> closeables = new ArrayList<>();
for (Processor.Factory factory : processorFactories.values()) {
if (factory instanceof Closeable) {
closeables.add((Closeable) factory);
}
}
IOUtils.close(closeables);
}
```
But apparently this method is never called in `Node#stop()`.
Closes #17625.
We have both `Settings.settingsBuilder` and `Settings.builder` that do exactly
the same thing, so we should keep only one. I kept `Settings.builder` since it
is my preference and also the one that we use in examples of the Java API.
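For illustration, the surviving entry point (setting key and value are arbitrary):

```java
import org.elasticsearch.common.settings.Settings;

public class BuilderExample {
    public static Settings nodeSettings() {
        // Previously this could also be written as Settings.settingsBuilder();
        // only Settings.builder() remains.
        return Settings.builder()
                .put("node.name", "node-1")
                .build();
    }
}
```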
* Create one AllField field per field eligible for _all.
* Add a positionIncrementGap (with a size of 100, not configurable) between
each entry in order to distinguish fields when doing phrase queries on _all.
This removes the inconsistent output of IP addresses. The format was parsing-unfriendly and made it hard
to reason about API responses, such as those from _nodes.
With this change in place, it will never print the hostname as part of the default format, which has the
added benefit that it can be used consistently for URIs, which was not the case when the hostname might
appear at the front with "hostname/ip:port".
CORS headers and methods config parameters must be read as arrays. This
commit fixes the issue. It affects http.cors.allow-methods and
http.cors.allow-headers.
Fixes #17483
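A hedged sketch of configuring these settings programmatically (the values are just examples):

```java
import org.elasticsearch.common.settings.Settings;

public class CorsSettingsExample {
    public static Settings corsSettings() {
        // Both settings are now read as arrays of values.
        return Settings.builder()
                .putArray("http.cors.allow-methods", "OPTIONS", "HEAD", "GET", "POST")
                .putArray("http.cors.allow-headers", "X-Requested-With", "Content-Type", "Content-Length")
                .build();
    }
}
```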
Aggregations need to perform instanceof calls on MappedFieldType instances in
order to know how they should be parsed or formatted. Instead, we should let
the field types provide a formatter/parser that can be used.
This change makes the root (/) rest api delegate to a transport action to get the
data for the response. This aligns this rest api with all of the other apis, which
delegate to one or more actions.
In doing this, unit tests were added to provide coverage of the RestMainAction
and the associated classes.
Because sigma is also used at reduce time, it should be passed to empty aggs.
Otherwise it causes bugs when an empty aggregation is used to perform a reduction,
as it would assume a sigma of zero.
Closes #17362
Today we fail if you try to add a field and an object from another type already
has the same name. However, we do NOT fail if you insert the field first and the
object afterwards. This leads to bad bugs since mappings are not necessarily
parsed in the same order at recovery time, so a mapping update could succeed and
then you would fail to reopen the index.
This commit removes a superfluous check when validating incoming cluster
states. The check in question prevents out-of-order cluster states from
the same master from entering the queue. However, such out-of-order
cluster states will be cleaned from the queue when a commit message for
that cluster state arrives or a commit message for any higher-versioned
cluster state arrives.
This removes PROTOTYPEs from ScoreFunctionsBuilders. To do so we rework
registration so it doesn't need PROTOTYPEs and lines up with the recent
changes to query registration.
This PR fixes a bug where an NPE was thrown when parsing a moving average pipeline aggregation request which did not specify a window size.
Closes #17516
This adds shuffling of xContent similar to #17521 to the aggregation and pipeline aggregation base test.
The additional shuffling uncovered that some aggregation builders internally store some properties in a
way that made the equals() testing fail when the xContent is shuffled.
For TopHitsAggregatorBuilder, the internal scriptFields parameter was changed to a set because the order
they appear in the xContent should not matter. For FiltersAggregatorBuilder, the internal list of KeyedFilters
is sorted by key now. As a side effect, the keys in the aggregation response are now not always in the same
order as the filters in the query, but sorted by key as well (unless they are anonymous).
DiscoveryNode is immutable yet we rebuild DiscoveryNode#toString on
every invocation. Most importantly, this just leads to unnecessary
allocations. This is most germane to ZenDiscovery and the processing of
cluster states where DiscoveryNode#toString is invoked when submitting
update tasks and processing cluster state updates.
Closes #17543
This fix ensures the filter and filters aggregations will not throw an NPE when `{}` is passed in as a filter. Instead, `{}` is interpreted as a MatchAllDocsQuery.
Closes #17518
This commit adds a guard preventing old cluster states from entering
into the pending queue when we are following a master, and cleans old
cluster states from the pending queue when processing a commit.
* [TEST] check registered queries one by one in SearchModuleTests
* Switch to using ParseField to parse query names
At the moment, if we have a deprecated query name we don't have a way to log any deprecation warning nor to fail when we are in strict mode. With this change we use ParseField, which will take care of the camel casing that we currently do manually (so that one day we can remove it more easily). This also means that each query will have a unique preferred name, and all the other names are deprecated.
The terms query "in" synonym is now formally deprecated, as are fuzzy_match, match_fuzzy, match_phrase and match_phrase_prefix for the match query, mlt for more_like_this and geo_bbox for geo_bounding_box. All of these will be removed in 6.0.
Every QueryParser holds now a ParseField constant called QUERY_NAME_FIELD that holds the name for it. The first name is the preferred one, all the others are deprecated. The first name is taken from the NAME constant already present in each query builder object, so that we somehow keep the serialization constant separated from ParseField. This change also allowed us to remove the names method from the QueryParser interface.
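As a sketch, such a constant (using the more_like_this/mlt pair mentioned above) registers the preferred name first and the deprecated synonym after it:

```java
import org.elasticsearch.common.ParseField;

public class MoreLikeThisQueryNames {
    // The first argument is the preferred name; any further arguments are
    // deprecated names that trigger a deprecation warning (or a failure in
    // strict mode) when used.
    public static final ParseField QUERY_NAME_FIELD = new ParseField("more_like_this", "mlt");
}
```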
The `phrase` and `phrase_prefix` options in the `MatchQueryBuilder` have been deprecated in favour of using the new `MatchPhraseQueryBuilder` and `MatchPhrasePrefixQueryBuilder`. This is not a breaking change since `MatchQueryBuilder` still supports `phrase` and `phrase_prefix`, but these options will be removed from the `MatchQueryBuilder` in the future (probably in 6.0).
Relates to https://github.com/elastic/elasticsearch/pull/17458#discussion_r58351998
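A minimal sketch of the replacement on the Java side (field name and text are made up):

```java
import org.elasticsearch.index.query.MatchPhraseQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

public class MatchPhraseExample {
    public static MatchPhraseQueryBuilder phraseQuery() {
        // Replaces the deprecated "phrase" option on the match query builder.
        return QueryBuilders.matchPhraseQuery("message", "quick brown fox")
                .slop(1);
    }
}
```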
Currently if thread cpu time is not supported (for instance, on
operating systems such as FreeBSD), an `IllegalStateException` is thrown
in `HotThreads#innerDetect()` that causes the API to return a useless
response.
This changes the check to be earlier, substituting a message for the
hot_threads output (in case some nodes *do* support it).
Additionally, if an exception is thrown during the hot_threads
generation, it is now logged and the best-effort output is returned.
When a shard is delayed, we now show output like:
```json
{
"shard" : {
"index" : "i",
"index_uuid" : "QzoKda9aQCG_hCaZQ18GEg",
"id" : 3,
"primary" : false
},
"assigned" : false,
"unassigned_info" : {
"reason" : "NODE_LEFT",
"at" : "2016-04-04T16:44:47.520Z",
"details" : "node_left[HyRLmMLxR5m_f58RKURApQ]"
},
"allocation_delay" : "59.9s",
"allocation_delay_ms" : 59910,
"remaining_delay" : "38.9s",
"remaining_delay_ms" : 38991,
"nodes" : {
"jKiyQcWFTkyp3htyyjxoCw" : {
"node_name" : "Landslide",
"node_attributes" : { },
"final_decision" : "YES",
"weight" : 1.0,
"decisions" : [ ]
},
"9bzF0SgoQh-G0F0sRW_qew" : {
"node_name" : "Caretaker",
"node_attributes" : { },
"final_decision" : "NO",
"weight" : 2.0,
"decisions" : [ {
"decider" : "same_shard",
"decision" : "NO",
"explanation" : "the shard cannot be allocated on the same node id [9bzF0SgoQh-G0F0sRW_qew] on which it already exists"
} ]
}
}
}
```
Where the new addition is this section:
```
"allocation_delay" : "59.9s",
"allocation_delay_ms" : 59910,
"remaining_delay" : "38.9s",
"remaining_delay_ms" : 38991,
```
Which shows the configured delay as well as the remaining delay until
the shard can be considered "assignable". This data is only shown if the
shard is unassigned.
Relates to #17372
Otherwise, when trying to calculate the amount of disk usage *after* the
shard has been allocated, it would incorrectly subtract the shadow
replica size.
Resolves #17460
* master: (156 commits)
Make JNA calls optional
Added RPM metadata
Remove PROTOTYPE from MLT.Item
Remove PROTOTYPE from VersionType
Fix mistake in TopHits change
Remove PROTOTYPEs from highlighting
Clean up some log messages
Command line arguments with comma must be quoted on windows
Cluster Health should run on applied states, even if waitFor=0 #17440
ingest: make concrete processor impl final, like all other processor concrete impls.
Improve some test method comments.
Document task id's as string in the rest spec
Replace FieldStatsProvider with a method on MappedFieldType. #17334
cleanup test
Remove MathUtils. #17454
Addressing review comments
fix javadocs
Make TranslogConfig immutable and pass TranslogGeneration as a ctor arg to Translog
[reindex] Don't get rejected
Remove redundant commit - #openTranslog() already commits in that case
...
The introduction of max number of processes and max size virtual memory
checks inadvertently made JNA non-optional on OS X and Linux. This
commit wraps these calls in a check to see if JNA is available so that
JNA remains optional.
Closes #17492
Make TranslogConfig immutable and pass TranslogGeneration as a ctor arg to Translog
This mutable state is confusing and is easily missed. By default it is null and
wipes the entire translog. This commit makes the TranslogGeneration mandatory on the Translog
constructor and removes the mutable state.
We already protect against making decisions based on an in-flight cluster state if someone asks for a waitFor rule (like wait for green). We should do the same for normal health checks as well (unless the timeout is set to 0), as it can be trappy to debug failures when health says the cluster is in a certain state but that state wasn't applied yet.
Closes #17440
FieldStatsProvider had to perform instanceof calls to properly handle dates or
ip addresses. By moving the logic to MappedFieldType, each field type can check
whether all values are within bounds in its own way.
Note that this commit only keeps rewriting support for dates, which are the only
field type for which the rewriting mechanism is likely to help (because of time-based
indices).
Move translog recovery outside of the engine
We changed the way we manage engine memory buffers to an
open model where each shard can essentially have infinite memory.
The indexing memory controller is responsible for moving memory to disk
when it's needed. Yet, this doesn't work today when we recover from store/translog,
since the engine is not fully initialized, such that the IMC has no access to the engine,
neither to its memory buffer, nor can it move data to disk.
The biggest issue here is that translog recovery happens inside the Engine constructor,
which is problematic by itself since it might take minutes and uses a not yet fully
initialized engine to perform write operations on.
This change detaches the translog recovery and makes it the responsibility of the caller
to run it once the engine is fully constructed or skip it if not necessary.
Changes QueryParser into a @FunctionalInterface and provides a way to
register queries using that. Cuts match and function_score queries over
to that registration method as a proof of concept.
Once all queries have been cut over we can remove their PROTOTYPES.
Currently our testing of parsing query builders is limited to the
default order of the parameters that each builder's toXContent()
method produces. To better test real queries where the order of
parameters can be different, this change adds a helper
method to ESTestCase that takes a XContentBuilder and randomly
shuffles the order of the fields inside an object. This is
used in AbstractQueryTestCase, but it can be used in other similar
places in the future.
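Not the actual ESTestCase helper, but a minimal standalone sketch of the idea, operating on an xContent document already parsed into a map:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

public class ShuffleFieldsSketch {
    /** Rebuilds the map with its keys in a random order; nested objects are shuffled too. */
    public static Map<String, Object> shuffle(Map<String, Object> source, Random random) {
        List<String> keys = new ArrayList<>(source.keySet());
        Collections.shuffle(keys, random);
        Map<String, Object> shuffled = new LinkedHashMap<>();
        for (String key : keys) {
            Object value = source.get(key);
            if (value instanceof Map) {
                @SuppressWarnings("unchecked")
                Map<String, Object> nested = (Map<String, Object>) value;
                value = shuffle(nested, random);
            }
            shuffled.put(key, value);
        }
        return shuffled;
    }
}
```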