Extended_stats now displays the upper and lower standard deviation bounds (e.g. avg +/- std).
The default is to show 2 standard deviations above/below the mean, but this can be changed using the `sigma` parameter, which accepts any non-negative double.
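As a quick illustration of what the bounds mean, here is a minimal sketch of the arithmetic implied by the description above (not the aggregation's actual implementation; `avg`, `stdDev` and `sigma` are placeholder values):

```java
// Sketch: bound = avg +/- sigma * std, with sigma defaulting to 2.
public class StdDevBoundsSketch {
    static double upperBound(double avg, double stdDev, double sigma) {
        return avg + sigma * stdDev;
    }

    static double lowerBound(double avg, double stdDev, double sigma) {
        return avg - sigma * stdDev;
    }

    public static void main(String[] args) {
        double avg = 100.0, stdDev = 15.0;
        double sigma = 2.0; // the default; any non-negative double is accepted
        System.out.println(upperBound(avg, stdDev, sigma)); // 130.0
        System.out.println(lowerBound(avg, stdDev, sigma)); // 70.0
    }
}
```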
Closes #9356
This change makes InternalHistogram the only InternalAggregation used by the Histogram Aggregator. There are still separate Bucket and Factory implementations. All buckets are created through the factory passed into the InternalHistogram, and the correct factory implementation is serialised as part of the aggregation to make sure the correct bucket types are always generated.
This is needed by the transformers (namely the derivative transformer) so they can generate buckets of the right type without having to know what the underlying bucket implementation is.
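A rough sketch of the idea, with all names invented for illustration: downstream code such as a derivative-style transformer only ever talks to the factory, so it can emit buckets of the correct concrete type without knowing what that type is.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch, not the real classes: the factory travels with the
// aggregation, so bucket creation never depends on a concrete bucket class.
interface Bucket {
    long key();
    long docCount();
}

interface BucketFactory {
    Bucket createBucket(long key, long docCount);
}

class DerivativeLikeTransformer {
    // Emits one bucket per input bucket (after the first) holding the doc count delta.
    List<Bucket> transform(List<Bucket> input, BucketFactory factory) {
        List<Bucket> out = new ArrayList<>();
        for (int i = 1; i < input.size(); i++) {
            long delta = input.get(i).docCount() - input.get(i - 1).docCount();
            out.add(factory.createBucket(input.get(i).key(), delta));
        }
        return out;
    }
}
```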
To properly replicate, we currently stop flushing during recovery so we can replay the translog once file copying is done. Once recovery is done, the translog will be flushed by a background thread that, by default, kicks in every 5s. In case of a recovery failure and a quick re-assignment of a new shard copy, we may fail to flush before starting a new recovery, causing it to deal with a potentially even longer translog. This commit makes sure we flush immediately when the ongoing recovery count drops to 0.
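A minimal sketch of the new behaviour, with invented names: the flush is triggered the moment the last ongoing recovery finishes instead of waiting for the periodic background flush.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the flush-on-last-recovery behaviour described above.
class RecoveryFlushTracker {
    private final AtomicInteger ongoingRecoveries = new AtomicInteger();

    void onRecoveryStart() {
        ongoingRecoveries.incrementAndGet(); // flushing stays disabled while > 0
    }

    void onRecoveryDone(Runnable flush) {
        if (ongoingRecoveries.decrementAndGet() == 0) {
            flush.run(); // flush immediately, trimming the translog for the next recovery
        }
    }
}
```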
I also added a simple recovery benchmark.
Closes #9439
If an index is deleted during the initial state of the snapshot operation, the entire snapshot can fail with an NPE. This commit improves handling of this situation and allows the snapshot to continue if partial snapshots are allowed.
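Roughly, the handling looks like the sketch below (all names invented); the point is that a missing index only fails the whole snapshot when partial snapshots are disallowed.

```java
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the index validation described above.
class SnapshotIndexValidation {
    static void validate(List<String> requestedIndices, Set<String> existingIndices, boolean partial) {
        for (String index : requestedIndices) {
            if (!existingIndices.contains(index)) {
                if (partial) {
                    // record the index as failed/missing and keep snapshotting the rest
                    System.out.println("skipping index deleted during snapshot: " + index);
                    continue;
                }
                throw new IllegalStateException("index [" + index + "] was deleted during snapshot");
            }
        }
    }
}
```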
Closes #9024
PR #8672 addresses ambiguous polygons - those that either cross the dateline or span the map - by complying with the OGC standard right-hand rule. Since ```GeoPolygonFilter``` is self-contained logic, the fix in #8672 did not address the issue for ```GeoPolygonFilter```. This was identified in issue #5968.
This fixes the ambiguous polygon issue in ```GeoPolygonFilter``` by moving the dateline crossing code from ```ShapeBuilder``` to ```GeoUtils``` and reusing the logic inside the ```pointInPolygon``` method. Unit tests are added to ensure support for coordinates specified in either standard lat/lon or great-circle coordinate systems.
Closes #5968
Closes #9304
The InternalClusterInfoService reaches out to the nodes to get information about their disk usage and shard store size. Upon a node-level error we currently remove that node's info from the local cache. We should also clear the cache when we run into an error on the action level (one that excludes info from all nodes).
This also adds settings for the timeout used when waiting for nodes.
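The caching behaviour can be pictured with this small sketch (invented names, not the actual service): node-level failures evict a single entry, while an action-level failure wipes the whole cache.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the disk usage cache handling described above.
class DiskUsageCacheSketch {
    private final Map<String, Long> freeBytesPerNode = new ConcurrentHashMap<>();

    void onNodeStats(String nodeId, long freeBytes) {
        freeBytesPerNode.put(nodeId, freeBytes);
    }

    void onNodeFailure(String nodeId) {
        freeBytesPerNode.remove(nodeId); // only this node's info is stale
    }

    void onActionFailure() {
        freeBytesPerNode.clear(); // no info was collected from any node
    }
}
```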
Closes #9449
When we renamed all of the transport actions in #7105, shard started and failed were flipped around by mistake. This commit fixes their naming.
Closes #9440
This change makes the response API object for Histogram Aggregations the same for all types of Histogram, and does the same for all types of Ranges.
The change removes getBucketByKey() from all aggregations except filters and terms. It also reduces the methods on the Bucket class to just getKey() and getKeyAsString().
The getKey() method returns Object and the actual type it returns will be appropriate for the type of aggregation being run, e.g. date_histogram will return a DateTime for this method and histogram will return a Number.
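A usage sketch of the unified bucket API (client boilerplate omitted; assumes a `Histogram` aggregation already pulled from a search response): the same loop works for `histogram` and `date_histogram`, only the runtime type behind `getKey()` differs.

```java
import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;

void printBuckets(Histogram histogram) {
    for (Histogram.Bucket bucket : histogram.getBuckets()) {
        Object key = bucket.getKey(); // a Number for histogram, a DateTime for date_histogram
        String keyAsString = bucket.getKeyAsString();
        System.out.println(keyAsString + " -> " + bucket.getDocCount());
    }
}
```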
Apparently some filesystems such as ZFS and occasionally NTFS can report
filesystem usages that are negative, or above the maximum total size of
the filesystem. This relaxes the constraints on `DiskUsage` so that an
exception is not thrown.
If 0 is passed as the totalBytes, `.getFreeDiskAsPercentage()` will
always return 100.0% free (to ensure the disk threshold decider fails
open).
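A minimal sketch of the fail-open rule described above (this is not the actual `DiskUsage` class, just the behaviour it now tolerates):

```java
// Sketch: a reported total of 0 bytes is treated as 100% free so the disk
// threshold decider does not block allocation on bogus filesystem stats.
class DiskUsageSketch {
    final long totalBytes;
    final long freeBytes;

    DiskUsageSketch(long totalBytes, long freeBytes) {
        this.totalBytes = totalBytes; // negative or oversized values are no longer rejected
        this.freeBytes = freeBytes;
    }

    double getFreeDiskAsPercentage() {
        if (totalBytes == 0) {
            return 100.0; // fail open
        }
        return 100.0 * freeBytes / totalBytes;
    }
}
```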
Fixes #9249
Relates to #9260
This bug was introduced by #8454, which allowed the childFilter to only be consumed once. By adding child doc id buffering, multiple buckets can now be emitted for the same doc id. This buffering only happens in the scope of the current root document, so the number of child doc ids buffered remains small.
Closes #9317
Closes #9346
The fix is to move the parent filter resolving from the nextReader(...) method to the collect(...) method, because only then is any parent nested filter's parent filter properly instantiated.
Closes #9280
Closes #9335
When we do not wait for green, we want to check that at least the primaries
succeeded, not that all shards succeeded as we would when waiting for green.
That was a misconception in c617af37e8.
Closes #9402
Squashed commit of the following:
commit 85c71b6478441a73738c81f02257193f9837f3ba
Author: Robert Muir <rmuir@apache.org>
Date: Sat Jan 24 11:24:36 2015 -0500
upgrade to lucene r1654549 snapshot
Requests are sent to two shard copies in case a shard is relocating.
This will show up in the _shards header. Therefore we must check
with greaterThanOrEqualTo(..).
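For example (illustrative numbers and Hamcrest matchers, not the actual test): with one shard relocating, the reported total can exceed the number of shards, so the assertion has to allow for that.

```java
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.greaterThanOrEqualTo;

class ShardHeaderCheckSketch {
    public static void main(String[] args) {
        int numberOfShards = 5;
        int reportedTotal = 6; // one shard was hit on both the source and the target copy
        assertThat(reportedTotal, greaterThanOrEqualTo(numberOfShards));
    }
}
```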
Adding missing support for the multi-index query parameters 'ignore_unavailable',
'allow_no_indices' and 'expand_wildcards' to the '_cluster/state' API. These
parameters are supposed to be supported for APIs that work across multiple indices.
So far, overriding the default settings per REST call was not possible, which is
fixed here.
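With the Java API the per-request override would look roughly like this (a sketch; the exact setter names on `ClusterStateRequest` may differ):

```java
import org.elasticsearch.action.admin.cluster.state.ClusterStateRequest;
import org.elasticsearch.action.support.IndicesOptions;

// Sketch: override ignore_unavailable / allow_no_indices / expand_wildcards
// for a single cluster state call instead of relying on the defaults.
ClusterStateRequest request = new ClusterStateRequest()
        .indices("logs-*")
        .indicesOptions(IndicesOptions.fromOptions(
                true,   // ignore_unavailable
                true,   // allow_no_indices
                true,   // expand_wildcards: open
                false   // expand_wildcards: closed
        ));
```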
Closes #5229
Closes #9295
These two tests are confusing because they have the same class name in
different packages. This results in accidentally looking at the wrong
file when trying to open the test by class name. They are also
not "simple"..