Today, during restart scenarios, it is possible that we recover from a node
that has already been upgraded to version N+1. The node that we relocate to
is on version N and might not be able to read the index format from the node
we relocate from. This causes an `IndexFormatTooNewException` during
recovery, but only after recovery has finished, which can cause large
load spikes during the upgrade period.
Closes #4588
The node we need to look up for attribute collection might no longer be
part of the `DiscoveryNodes` due to node failure or shutdown. This
commit adds a check and removes the shard from the iteration.
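A minimal sketch of that guard, assuming a `shards` iterable and a hypothetical `collectAttributes` helper (names are illustrative, not the actual implementation):

```java
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.cluster.node.DiscoveryNodes;
import org.elasticsearch.cluster.routing.ShardRouting;

// Illustrative only: skip shards whose node is no longer part of the cluster
// before collecting its attributes.
void collectNodeAttributes(Iterable<ShardRouting> shards, DiscoveryNodes nodes) {
    for (ShardRouting shard : shards) {
        DiscoveryNode node = nodes.get(shard.currentNodeId());
        if (node == null) {
            continue; // node failed or shut down; drop the shard from the iteration
        }
        collectAttributes(node);
    }
}
```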
Closes #4589
This adds the field data circuit breaker, which is used to estimate
the amount of memory required to load field data before loading it. It
then raises a CircuitBreakingException if the limit is exceeded.
It is configured with two parameters:
* `indices.fielddata.cache.breaker.limit` - the maximum number of bytes
of field data to be loaded before circuit breaking. Defaults to
`indices.fielddata.cache.size` if set, unbounded otherwise.
* `indices.fielddata.cache.breaker.overhead` - a constant that all field
data estimations are multiplied by before aggregation. Defaults to
1.03.
Both settings can be configured dynamically using the cluster update
settings API.
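For example, a minimal sketch of bumping both values at runtime through the Java client (the setting names are the ones above; the exact client calls and values are assumptions, not part of this commit):

```java
import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.ImmutableSettings;

// Sketch: dynamically adjust the field data breaker limit and overhead.
void updateBreakerSettings(Client client) {
    client.admin().cluster().prepareUpdateSettings()
        .setTransientSettings(ImmutableSettings.settingsBuilder()
            .put("indices.fielddata.cache.breaker.limit", "2gb")
            .put("indices.fielddata.cache.breaker.overhead", 1.03)
            .build())
        .execute().actionGet();
}
```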
Today we try to detect whether we need to generate the mapping or not in
the `_all` field mapper. This is error-prone since it misses conditions that
are not explicitly added. We should rather simulate the generation instead.
This commit also adds a random test to check that the settings
of the `_all` field mapper are correctly applied.
Closes #4579
Closes #4581
Today, if a specific feature is disabled for term vectors with something
like `store_term_vector_positions = false`, term vectors might be disabled
altogether even if `store_term_vectors = true` in the mapping. This depends on
the order of the values in the mapping, since the more specific option might
override the less specific one.
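As an illustration, in a mapping fragment like the following (built with `XContentBuilder`; the type and field names are made up) term vectors should stay enabled no matter in which order the two flags appear:

```java
import org.elasticsearch.common.xcontent.XContentBuilder;
import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;

// Sketch: a more specific term vector option is disabled while term vectors
// themselves are enabled; the order of the two flags must not matter.
XContentBuilder mapping = jsonBuilder()
    .startObject()
        .startObject("type1")
            .startObject("properties")
                .startObject("body")
                    .field("type", "string")
                    .field("store_term_vector_positions", false)
                    .field("store_term_vectors", true)
                .endObject()
            .endObject()
        .endObject()
    .endObject();
```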
Closes #4582
Currently there are two get aliases APIs that both have the same functionality, but have a different response structure. The reason for having two APIs is historical.
The `GET _alias` API was added in 0.90.x and is more efficient since it only sends the needed alias data from the cluster state between the master node and the node that received the request. With the `GET _aliases` API the complete cluster state is sent to the node that received the request, and then the right information is filtered out and sent back to the client.
The `GET _aliases` API should be removed in favour of the `_alias` API.
Closes #4539
* `ignore_unavailable` - Controls whether to ignore any specified indices that are unavailable; this includes indices that don't exist or closed indices. Either `true` or `false` can be specified.
* `allow_no_indices` - Controls whether to fail if a wildcard indices expression resolves to no concrete indices. Either `true` or `false` can be specified. For example, if the wildcard expression `foo*` is specified and no indices are available that start with `foo`, then depending on this setting the request will fail. This setting is also applicable when `_all`, `*`, or no index has been specified.
* `expand_wildcards` - Controls which kind of concrete indices wildcard indices expressions expand to. If `open` is specified, the wildcard expression is expanded to open indices only; if `closed` is specified, it is expanded to closed indices only. Both values (`open,closed`) can also be specified to expand to all indices.
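On the Java API side these flags roughly map onto `IndicesOptions`; a sketch of building the equivalent options object (the chosen values are arbitrary):

```java
import org.elasticsearch.action.support.IndicesOptions;

// Sketch: ignore unavailable indices, do not fail on an empty wildcard match,
// and expand wildcard expressions to open indices only.
IndicesOptions options = IndicesOptions.fromOptions(
        true,   // ignore_unavailable
        true,   // allow_no_indices
        true,   // expand_wildcards includes open indices
        false   // expand_wildcards includes closed indices
);
```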
Closes #4436
Results in less data being sent over the wire, as the Cat API does not
need to have the whole cluster state.
Also added matchers for `hasKey()` for `ImmutableOpenMap` (I think we should
add more of those to have map-style assertions).
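A rough sketch of what such a matcher can look like (an illustration of the idea, not the exact helper added here):

```java
import org.elasticsearch.common.collect.ImmutableOpenMap;
import org.hamcrest.Description;
import org.hamcrest.TypeSafeMatcher;

// Illustrative hasKey()-style matcher for ImmutableOpenMap.
public class ImmutableOpenMapHasKey<K> extends TypeSafeMatcher<ImmutableOpenMap<K, ?>> {
    private final K key;

    public ImmutableOpenMapHasKey(K key) {
        this.key = key;
    }

    @Override
    protected boolean matchesSafely(ImmutableOpenMap<K, ?> map) {
        return map.containsKey(key);
    }

    @Override
    public void describeTo(Description description) {
        description.appendText("an ImmutableOpenMap containing the key ").appendValue(key);
    }
}
```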
Closes #4455
This change makes single shard requests fail when no routing is specified and routing has been configured to be required in the mapping.
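For instance, with a type whose mapping declares `"_routing": {"required": true}`, a single-document get that omits the routing value should now fail up front; a hedged sketch via the Java client (index and type names are made up, and the exact exception is an assumption):

```java
import org.elasticsearch.action.RoutingMissingException;
import org.elasticsearch.client.Client;

// Sketch: the mapping requires routing, but no routing is set on the request,
// so the single shard request is expected to fail instead of silently missing.
void getWithoutRouting(Client client) {
    try {
        client.prepareGet("orders", "order", "1").execute().actionGet();
    } catch (RoutingMissingException e) {
        // routing was required by the mapping but not provided
    }
}
```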
Closes #4506
The following APIs now accept the query in a top-level `query` field:
* delete_by_query
* validate_query
* count
These APIs used to accept the query directly in the request body, which was inconsistent with the search and explain APIs.
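As an illustration (the query itself is just an example), a `_count` request body changes roughly like this:

```java
// Sketch: old vs. new request body for the _count endpoint. The query is
// now nested under a top-level "query" field, as in the search API.
String oldBody = "{ \"term\": { \"user\": \"kimchy\" } }";
String newBody = "{ \"query\": { \"term\": { \"user\": \"kimchy\" } } }";
```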
Closes #4074
Tests can be run with `-Dtests.assertion.disabled=org.elasticsearch`
to run the tests without assertions to make sure assertions
don't hide any assignments etc. that introduce bugs in production.
We are mocking out some functionality to add assertions etc. or
randomize store types. We should randomly run with our defaults to make
sure we don't hide any potential problems.
Currently we only test if readers are correctly released when exceptions
occur during reopen or flush. This commit adds a test that
randomly throws exceptions during the search execution, i.e. when Terms
are pulled or when a docs enum is created.
This commit adds doc values support to geo points using the exact same approach
as for numeric data: geo points for a given document are stored uncompressed
and sequentially in a single binary doc values field.
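Conceptually, the per-document encoding just appends the raw latitude/longitude doubles into one binary doc values field; a simplified sketch in plain Lucene terms (not the actual field data implementation):

```java
import java.nio.ByteBuffer;
import org.apache.lucene.document.BinaryDocValuesField;
import org.apache.lucene.util.BytesRef;

// Simplified sketch: pack every geo point of a document as two uncompressed
// doubles, back to back, into a single binary doc values field.
static BinaryDocValuesField encodeGeoPoints(String field, double[][] points) {
    ByteBuffer buffer = ByteBuffer.allocate(points.length * 16);
    for (double[] point : points) {
        buffer.putDouble(point[0]); // latitude
        buffer.putDouble(point[1]); // longitude
    }
    return new BinaryDocValuesField(field, new BytesRef(buffer.array()));
}
```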
Closes #4207
Until now, RangeAggregator was a PER_BUCKET aggregator, expecting to always be
collected with owningBucketOrdinal == 0. However, since the number of buckets
it creates is known in advance, it can be changed to a MULTI_BUCKETS aggregator
by just multiplying the bucket ordinal by the number of ranges.
This makes aggregations that have ranges as sub-aggregations of PER_BUCKET
aggregators more efficient.
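The key piece is just the ordinal arithmetic: because every owning bucket produces the same fixed number of ranges, each (owning bucket, range) pair gets its own dense ordinal. A minimal sketch of that mapping (names are illustrative, not the actual aggregator code):

```java
// Sketch: with a fixed number of ranges per owning bucket, the sub-bucket
// ordinal can be derived directly instead of assuming owningBucketOrdinal == 0.
long subBucketOrdinal(long owningBucketOrdinal, int rangeIndex, int numRanges) {
    return owningBucketOrdinal * numRanges + rangeIndex;
}
```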
Closes #4550