With the previous breaker limit, spurious failures at the transport level
could happen in the test. With the new limit we ensure that the breaker
trips due to a too-large HTTP request instead.
Previously the sigma variable in the `extended_stats_bucket` pipeline aggregation was not being serialised in `ExtendedStatsBucketPipelineAggregator`. This PR fixes that.
It also corrects the initial value of `sumOfSquares` to be 0.
Closes #17701
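The shape of the serialization fix, as a minimal sketch (not the exact Elasticsearch code; `writeDouble`/`readDouble` are the stream primitives involved, the class and method names here are illustrative):

```java
import java.io.IOException;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;

// Minimal sketch: sigma has to be written and read like the rest of the
// aggregator's state, otherwise it is silently reset when the aggregation
// travels between nodes.
class SigmaSerializationSketch {
    private double sigma;

    void writeTo(StreamOutput out) throws IOException {
        out.writeDouble(sigma); // previously missing
    }

    void readFrom(StreamInput in) throws IOException {
        sigma = in.readDouble();
    }
}
```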
#14259 added a check to honor rebalancing policies (i.e., rebalance only on green state) when moving shards due to changes in allocation filtering rules. The rebalancing policy is there to make sure that we don't try to even out the number of shards per node when we are still missing shards. However, it should not interfere with explicit user commands (allocation filtering) or things like the disk threshold wanting to move shards because of a node hitting the high water mark.
#14259 was done to address #14057, where people reported that using allocation filtering caused many shards to be moved at once. This is however a non-issue: with 1.7 (where the issue was reported) and 2.x, we protect recovery source nodes by limiting the number of concurrent data streams they can open (i.e., we can have many recoveries, but they will be throttled). In 5.0 we came up with a simpler and more understandable approach where we have a hard limit on the number of outgoing recoveries per node (on top of the incoming recoveries limit we already had).
This commit removes the last remaining uses of JAVA_OPTS. Now searching
the codebase for the regex '(?<!ES_)JAVA_OPTS' only turns up the uses
that warn of its removal and the note about it in the migration docs.
* Tokens in the same position are grouped into a SynonymQuery.
* The default operator is applied on tokens in different positions.
* The wildcard is applied to the terms in the last position only.
Fixes #2183
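A rough Lucene-level illustration of these rules for an input like `ny city*`, where `ny` and `newyork` are single-token synonyms at the same position (the field name, the tokens, and the AND default operator are assumptions for the example):

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.PrefixQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.SynonymQuery;

public class QueryShapeExample {
    public static void main(String[] args) {
        BooleanQuery.Builder builder = new BooleanQuery.Builder();
        // Rule 1: tokens at the same position ("ny", "newyork") are grouped
        // into a single SynonymQuery so they score as one term.
        builder.add(new SynonymQuery(new Term("body", "ny"), new Term("body", "newyork")),
                BooleanClause.Occur.MUST);
        // Rules 2 and 3: the default operator joins the positions (MUST for
        // AND), and only the token in the last position is expanded as a
        // wildcard.
        builder.add(new PrefixQuery(new Term("body", "city")),
                BooleanClause.Occur.MUST);
        Query query = builder.build();
        System.out.println(query); // +Synonym(body:newyork body:ny) +body:city*
    }
}
```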
This commit modifies the bootstrap checks to output all failing checks
instead of early-failing on the first check and then possibly failing
again after the user resolves the first failing check.
Closes #17474
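In simplified form, the new behavior is the following (a sketch; the real `BootstrapCheck` interface has more to it, but `check()` returning true on failure matches the pattern assumed here):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch: evaluate every check, collect all failures, and fail
// once with the combined list instead of aborting on the first failure.
class BootstrapChecksSketch {
    interface Check {
        boolean check();          // true means the check failed
        String errorMessage();
    }

    static void enforce(List<Check> checks) {
        final List<String> errors = new ArrayList<>();
        for (final Check check : checks) {
            if (check.check()) {
                errors.add(check.errorMessage());
            }
        }
        if (errors.isEmpty() == false) {
            throw new RuntimeException("bootstrap checks failed: " + String.join("; ", errors));
        }
    }
}
```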
This commit moves the execution of the bootstrap checks to after network
services are started. This gives us the flexibility to not merely check
if any of the network settings are set, but instead be smarter about it
and check if the network settings are set in a way that means that the
node will be communicating over an external network (either directly, or
via a proxy). As a bonus, executing the bootstrap checks in this way
enables us to have the node name in the logs!
Closes #17570
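A hypothetical sketch of the kind of test this enables once the actual bound addresses are known (the names are illustrative, not the real implementation):

```java
import java.net.InetAddress;
import java.util.List;

// Sketch: enforce the checks only when at least one bound address is neither
// loopback nor link-local, i.e. the node is actually reachable over an
// external network.
class ExternalNetworkSketch {
    static boolean enforceBootstrapChecks(List<InetAddress> boundAddresses) {
        for (InetAddress address : boundAddresses) {
            if (address.isLoopbackAddress() == false && address.isLinkLocalAddress() == false) {
                return true;
            }
        }
        return false;
    }
}
```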
If ts=0, cat health disables the epoch and timestamp columns
Make the timestamp and epoch column names constant Strings
Move the timestamp and epoch handling into Table
Add a REST API test and a unit test
Closes #10109
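For example (per the usual cat-API boolean conventions, `0` and `false` are equivalent), `GET _cat/health?ts=false` now omits the epoch and timestamp columns from the output.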
With this commit we limit the size of all in-flight requests at the
HTTP level. The size is guarded by the same circuit breaker that is
also used at the transport level. Similarly, the size that is used is
the HTTP content length.
Relates #16011
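Schematically, the flow looks like this (a rough sketch; the names are illustrative and the real plumbing lives in the HTTP transport):

```java
import org.elasticsearch.common.breaker.CircuitBreaker;

// Sketch: reserve the request's content length against the shared in-flight
// breaker up front, and release it once the request has been handled.
class InFlightHttpSketch {
    private final CircuitBreaker inFlightRequestsBreaker;

    InFlightHttpSketch(CircuitBreaker breaker) {
        this.inFlightRequestsBreaker = breaker;
    }

    void handle(long contentLength, Runnable dispatch) {
        // Throws if adding contentLength would exceed the breaker's limit.
        inFlightRequestsBreaker.addEstimateBytesAndMaybeBreak(contentLength, "<http_request>");
        try {
            dispatch.run();
        } finally {
            inFlightRequestsBreaker.addWithoutBreaking(-contentLength);
        }
    }
}
```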
With this commit we limit the size of all in-flight requests at the
transport level. The size is guarded by a circuit breaker and is
based on the content size of each request.
By default we use 100% of the available heap, meaning that the parent
circuit breaker will limit the maximum available size. This value
can be changed by adjusting the setting
`network.breaker.inflight_requests.limit`
Relates #16011
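For instance, lowering the limit from its 100% default could look like this (a sketch via the `Settings` builder; the same key works in the config file):

```java
import org.elasticsearch.common.settings.Settings;

// Sketch: cap the in-flight requests breaker at 75% of the heap.
Settings settings = Settings.builder()
        .put("network.breaker.inflight_requests.limit", "75%")
        .build();
```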
The cluster reroute API had a copy of NamedWriteableRegistry's behavior
inside it in the form of AllocationCommands#registerFactory and
AllocationCommands#lookupFactorySafe. There isn't a reason to duplicate
that effort. So this replaces all of AllocationCommand#Factory with
query-like registration in NetworkModule. Why NetworkModule? Because
the transport client needs it.
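The registration ends up looking roughly like query registration does; the exact signature below is an assumption, but the shape is one `Writeable` reader for the wire format plus one parser for the REST body, keyed by the command name:

```java
// Illustrative only: register an allocation command with NetworkModule the
// way queries are registered, instead of AllocationCommands#registerFactory.
networkModule.registerAllocationCommand(
        MoveAllocationCommand::new,          // wire format: StreamInput -> command
        MoveAllocationCommand::fromXContent, // REST body: XContentParser -> command
        MoveAllocationCommand.COMMAND_NAME_FIELD);
```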
Makes Writeable not depend on StreamableReader. Keeps the default readFrom
implementation for backwards compatibility during the PROTOTYPE removal,
but that will go away when those are gone.
Makes Diffable not extend StreamableReader. Instead Diffable has a readFrom
method. The PROTOTYPE removal will not get to cluster state for a long time
so that method will stay.
Now only a few things implement StreamableReader. They will be addressed
individually and then we'll remove StreamableReader.
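In simplified form, the resulting interfaces look roughly like this (a sketch, not the full declarations):

```java
import java.io.IOException;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;

// Sketch: Writeable no longer extends StreamableReader; readFrom survives
// only as a default for backwards compatibility until PROTOTYPEs are gone.
interface Writeable<T> {
    void writeTo(StreamOutput out) throws IOException;

    default T readFrom(StreamInput in) throws IOException {
        throw new UnsupportedOperationException("only PROTOTYPE holdovers implement this");
    }
}

// Sketch: Diffable declares its own readFrom instead of inheriting it,
// since cluster state will keep using it for a long time.
interface Diffable<T> {
    T readFrom(StreamInput in) throws IOException;
}
```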
When an index is recovered from disk, its metadata is imported first and the master reaches out to the nodes looking for shards of that index. Sometimes those requests reach other nodes before the cluster state is processed by them. At the moment, that situation disables the checking of the store, which requires the metadata (indices with a custom path need to know where the data is). When corruption hits, this means we may assign a shard to a node with a corrupted store, which will be caught later on but causes confusion. Instead we can try loading the metadata from disk in those cases.
Relates to #17630
When it comes to query parsing, either a field is tokenized, in which case it
goes through analysis with its search_analyzer, or it is not tokenized and the
raw string should be passed to termQuery(). Since numeric fields are not
tokenized yet also declare a search analyzer, values would currently go through
analysis twice...
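The fix boils down to branching on `tokenized()` before deciding to analyze at all, roughly like this (a simplified sketch; `parseWithAnalyzer` is a made-up helper):

```java
// Sketch: only tokenized fields go through the search analyzer; everything
// else (numerics included) passes the raw value straight to termQuery, so
// each value is parsed exactly once.
if (fieldType.tokenized()) {
    return parseWithAnalyzer(fieldType.searchAnalyzer(), fieldName, value);
} else {
    return fieldType.termQuery(value, context);
}
```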
We have a couple places in the code base that assume that search is always done
on the inverted index. However with the new points API in Lucene 6, this is not
true anymore. This commit makes MappedFieldType.indexedValueForSearch protected
and fixes call sites to keep working for field types that use the inverted
index, and either work differently or throw an exception otherwise. For instance,
it will still be possible to run cross_fields multi match queries on numeric
fields, but the score contributions will not be blended as well as before, and
significant terms aggregations on long terms will not be possible anymore since
points do not record document frequencies.
This commit removes `MappedFieldType.value` and simplifies
`MappedFieldType.valueForSearch`. `valueForSearch` was used to post-process
values that come from stored fields (e.g. to convert a long back to a string
representation of a date in the case of a date field) and also values that
are extracted from the source, but only in the case of GET calls: it would
not be called when performing source filtering on search requests.
`valueForSearch` is now only called for stored fields, since values that are
extracted from the source should already be formatted as expected.
In both cases, what Elasticsearch is really interested in is whether the field
is an analyzed string field. So it can just check `tokenized()` instead.
* upgrades numerics to the new Point format
* updates geo API changes
* adds GeoPointDistanceRangeQuery as XGeoPointDistanceRangeQuery
* cuts over to ES GeoHashUtils
TL;DR: this commit should not have any impact on terms aggs; it will just make
supporting IPv6 easier.
Currently only the numeric terms aggs propagate the DocValueFormat instance, since
we use numerics to represent dates and IP addresses as well. Since string terms aggs
are only used for text/keyword/string fields, they do not use the format and just
call utf8ToString(). However when we support IPv6, IP addresses too will be
encoded in sorted doc values (just like strings), so we will need to use the
DocValueFormat to format the keys.
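The change amounts to rendering keys through the format instead of assuming UTF-8 bytes (a sketch; `DocValueFormat.RAW` reproduces today's behavior for plain strings, while an IP format would decode the address):

```java
// Sketch: the bucket key bytes are interpreted by the DocValueFormat, so the
// same code path can render strings today and IPv6 addresses later.
String rendered = format.format(key);   // instead of key.utf8ToString()
```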