Histogram aggregation now supports an 'offset' option to move bucket boundaries.
In a histogram with buckets of size X, the boundaries normally fall at 0, X, 2X, 3X, ...;
an offset value of Y moves them to Y, X+Y, 2X+Y, 3X+Y, ...
The previous 'pre_offset' and 'post_offset' options are removed in favour of
the simplified 'offset' option.
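A minimal sketch of the resulting bucket assignment (illustrative only, not the aggregator's actual code):

```java
// Illustrative only: compute the lower bound (key) of the bucket a value
// falls into, for buckets of size `interval` shifted by `offset`.
public final class HistogramOffsetSketch {

    static long bucketKey(long value, long interval, long offset) {
        return Math.floorDiv(value - offset, interval) * interval + offset;
    }

    public static void main(String[] args) {
        // interval = 10, offset = 5 -> bucket boundaries at ..., -5, 5, 15, 25, ...
        System.out.println(bucketKey(17, 10, 5)); // 15
        System.out.println(bucketKey(4, 10, 5));  // -5
    }
}
```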
Closes#9417
Closes#9505
Until now, there was no way to expose information about configured
transport profiles. This commit adds the ability to expose that
information in the TransportInfo class.
The channel as well as the netty pipeline handler now also contain
the profile they were configured for, as this information cannot be
extracted elsewhere.
In addition, each profile can now set its own publish host and port,
which might be needed in case of port forwarding or when using Docker.
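As a sketch, the per-profile publish settings might look like the following; the exact setting keys are an assumption based on the `transport.profiles.*` naming scheme and should be checked against the documentation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Assumed setting keys, following the transport.profiles.* naming scheme.
public final class TransportProfilePublishSketch {
    public static void main(String[] args) {
        Map<String, String> settings = new LinkedHashMap<>();
        settings.put("transport.profiles.client.port", "9500-9600");
        // Publish a different host/port, e.g. behind port forwarding or inside Docker.
        settings.put("transport.profiles.client.publish_host", "192.168.1.5");
        settings.put("transport.profiles.client.publish_port", "9500");
        settings.forEach((key, value) -> System.out.println(key + ": " + value));
    }
}
```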
Closes#9134
This is similar to refresh: if we fail to commit the data, we have to fail the
engine since the in-RAM data is likely discarded. The data is still in the translog and might
be recoverable when the node is restarted, but we have to treat the engine as failed.
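A minimal sketch of this behaviour, assuming a hypothetical failEngine hook; this is not the actual Engine code:

```java
import java.io.IOException;
import org.apache.lucene.index.IndexWriter;

// Illustrative only: if committing fails, the engine is treated as failed
// because the in-memory segments can no longer be trusted; the operations
// remain in the translog and may be replayed on restart.
abstract class CommitOrFailSketch {

    // Hypothetical hook that marks the engine as failed.
    abstract void failEngine(String reason, Throwable cause);

    void commitOrFail(IndexWriter writer) throws IOException {
        try {
            writer.commit();
        } catch (IOException | RuntimeException e) {
            failEngine("commit failed", e);
            throw e;
        }
    }
}
```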
This callback is executed only once, on the master node during an
index's creation. An exception thrown from this listener will cancel
the index creation.
This also adds checks in `IndicesClusterStateService` for the
indexService being null, as well as handling the case where
`indicesService.createIndex` throws an exception on data nodes after an
index has already been created.
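A hypothetical shape for such a callback (the real interface name and signature may differ):

```java
// Hypothetical illustration only; the actual listener interface in
// Elasticsearch may have a different name and signature.
interface IndexCreationCallback {
    // Invoked exactly once, on the master node, while the index is created.
    // Throwing from this method cancels the index creation.
    void onIndexCreated(String indexName) throws Exception;
}
```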
#8720 introduced a timeout mechanism for ongoing recoveries, based on a last access time variable. In the many iterations on that PR the update of the access time was lost. This adds it back, including a test that should have been there in the first place.
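A sketch of the mechanism (names are illustrative, not the actual recovery code): each incoming recovery request bumps the last access time so the monitor does not consider the recovery stale.

```java
// Illustrative only: how a last-access timestamp can drive a recovery timeout.
final class RecoveryActivitySketch {
    private volatile long lastAccessTimeNanos = System.nanoTime();

    // Called whenever a recovery-related request is handled.
    void touch() {
        lastAccessTimeNanos = System.nanoTime();
    }

    // Periodically checked by a monitor; a stale recovery gets cancelled.
    boolean isStale(long timeoutNanos) {
        return System.nanoTime() - lastAccessTimeNanos > timeoutNanos;
    }
}
```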
Closes#9506
The query cache has an optimization to not deserialize the bytes at the shard
level. However, this is a bit fragile since it assumes that serialized streams
can be concatenated (which is not the case with shared strings) and also does
not update the QueryResult object that is held by the SearchContext, so you
need to make sure to use the right one.
With this change, the query cache just deserializes bytes into the QueryResult
object from the context.
Close#9500
Due to some unreleased refactorings we lost the persistence of
previously set values in MergePolicyProvider. This commit adds this
back and adds a simple unit test.
Closes#8890
Currently, doing a field lookup within a terms agg will restrict the
fields available to those within the types passed into the search
request. However, when doing sub aggs within a children agg, the
fields available should not be restricted to those of the search.
This change makes the field lookup use the index level mapper service.
The optimization we do in the HandlesStreamInput / Output
adds a lot of complexity with a rather unknown benefit. It tries
to compress commonly used strings and write ids instead. This
should rather be done on a lower level, if at all necessary, for
the small messages we send over the network.
Today we have a dirty flag indicating that a refresh must
be executed. We also allow users to bypass this by setting
a force=true boolean on the refresh request / command. All
these flags are unneeded since the SearcherManager has all
the information to do the right thing, whether it is dirty or not.
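For illustration, Lucene's SearcherManager already behaves this way: maybeRefresh() is effectively a no-op when nothing has changed, so an external dirty flag adds nothing. A minimal sketch:

```java
import java.io.IOException;
import org.apache.lucene.search.SearcherManager;

// Illustrative only: SearcherManager refreshes only when the underlying
// reader actually changed.
final class RefreshSketch {
    private final SearcherManager searcherManager;

    RefreshSketch(SearcherManager searcherManager) {
        this.searcherManager = searcherManager;
    }

    void refresh() throws IOException {
        // Does no work if the searcher is already current.
        searcherManager.maybeRefresh();
    }
}
```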
BytesStreamOutput allows passing the expected size but by default uses
BigArrays.PAGE_SIZE_IN_BYTES, which is 16k. A common cached result, i.e.
a date histogram with 3 buckets, is ~100 bytes, so 16k might be very wasteful
since we don't shrink to the actual size once we are done serializing.
By passing 512 as the expected size we will grow the byte array in the stream
gradually until we hit the page size and don't waste too much memory for small query
results.
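A sketch of the arithmetic behind this choice (numbers illustrative):

```java
// Illustrative only: a ~100 byte serialized result in a 16k page wastes
// almost the whole buffer, while starting from a 512 byte buffer keeps the
// overhead small for tiny results and still grows for large ones.
public final class ExpectedSizeSketch {
    public static void main(String[] args) {
        int pageSize = 16 * 1024;   // BigArrays.PAGE_SIZE_IN_BYTES
        int resultSize = 100;       // e.g. a date histogram with 3 buckets
        System.out.println("wasted with 16k default:       " + (pageSize - resultSize) + " bytes");
        System.out.println("wasted with 512 expected size: " + (512 - resultSize) + " bytes");
    }
}
```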
This PR removes the optimization for auto generated ids.
Previously, when ids were auto generated by elasticsearch, there was no
check to see if a document with the same id already existed and instead the new
document was only appended. However, due to lucene improvements this
optimization does not add much value. In addition, under rare circumstances it might
cause duplicate documents:
When an indexing request is retried (due to connection lost, node closed, etc.),
then a flag 'canHaveDuplicates' is set to true for the indexing request
that is sent a second time. This was to make sure that even
when an indexing request for a document with an autogenerated id comes in,
we do not have to update unless this flag is set and instead only append.
However, it might happen that for a retry or for the replication the
indexing request that has canHaveDuplicates set to true (the retried request) arrives
at the destination before the original request that has it set to false.
In this case both requests add a document and we end up with a duplicate document.
This commit adds a workaround: remove the optimization for auto
generated ids and always update the document.
The assumption is that this will not slow down indexing by more than 10 percent,
see: http://benchmarks.elasticsearch.org/
Closes#8788
Closes#9468
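A sketch of the change in Lucene terms (not the actual Engine code): instead of blindly appending documents with auto-generated ids, every document is indexed as an update on its uid, which stays idempotent if the same request arrives twice:

```java
import java.io.IOException;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;

// Illustrative only: updateDocument deletes any previous document with the
// same uid term before adding the new one, so a retried request cannot
// leave two copies behind, at the cost of the extra delete-by-term work.
final class AlwaysUpdateSketch {
    static void index(IndexWriter writer, String uid, Document doc) throws IOException {
        writer.updateDocument(new Term("_uid", uid), doc); // instead of writer.addDocument(doc)
    }
}
```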
Today we use the length of the BytesReference, which is misleading since
the reference is paged such that the length != ramBytesUsed. This can lead
to a much higher memory consumption than expected if query results are tiny,
since each query result requires at least 16kb. Yet, we should rethink this
strategy for query results that are very small, i.e. less than 20% of the ramBytesUsed,
but this commit first tries to make the accounting correct.
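A sketch of why the two numbers diverge for tiny entries (numbers illustrative):

```java
// Illustrative only: a 100 byte cached value still occupies a full 16k page,
// so weighing cache entries by length() under-counts real memory usage.
public final class CacheWeightSketch {
    public static void main(String[] args) {
        long length = 100;              // bytes actually written
        long pageSize = 16 * 1024;      // smallest backing page for a paged BytesReference
        long ramBytesUsed = pageSize;   // what the entry really costs in memory
        System.out.println("weight by length:       " + length);
        System.out.println("weight by ramBytesUsed: " + ramBytesUsed);
    }
}
```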
The `analyzer` setting is now the base setting, and `search_analyzer`
is simply an override of the search time analyzer. When setting
`search_analyzer`, `analyzer` must be set.
closes#9371
Using the `script.groovy.sandbox.method_blacklist_patch` setting, the
blacklist can be dynamically *added* to by specifying a comma-separated
list of methods (for example, "toString,size" would add .toString and
.size to the blacklist).
Additionally, this setting can be specified in elasticsearch.yml if
desired, to pre-populate the list of methods to be added to the default
blacklist.
When the `script.groovy.sandbox.method_blacklist_patch` setting is
changed, the script cache is cleared to force new scripts to be
recompiled. Additionally the on-disk cache is cleared so that scripts in
the `config/scripts` directory are re-compiled as well.
When making a change to this setting dynamically, the entire blacklist
is logged as well.
This also fixes an issue where script engines were injected more than
once, which can cause multiple instances of the script engine per node.
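A rough sketch of how the comma-separated patch value extends the blacklist (plain Java, not the actual sandbox code; the default entries shown are made up):

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

// Illustrative only: merge an existing blacklist with the comma-separated
// methods supplied via the patch setting.
public final class BlacklistPatchSketch {
    public static void main(String[] args) {
        Set<String> blacklist = new LinkedHashSet<>(Arrays.asList("getClass", "wait", "notify"));
        String patch = "toString,size"; // value of script.groovy.sandbox.method_blacklist_patch
        for (String method : patch.split(",")) {
            blacklist.add(method.trim());
        }
        System.out.println(blacklist);
    }
}
```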
`extended_stats` now displays the upper and lower standard deviation bounds (e.g. avg +/- std).
The default is to show 2 standard deviations above/below the mean, but this can be changed
using the `sigma` parameter, which accepts non-negative doubles.
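A worked example of the bounds with the default sigma of 2 (values illustrative):

```java
// Illustrative only: how the standard deviation bounds relate to avg and sigma.
public final class StdDeviationBoundsSketch {
    public static void main(String[] args) {
        double avg = 50.0;
        double stdDeviation = 10.0;
        double sigma = 2.0; // default; configurable via the `sigma` parameter
        double upper = avg + sigma * stdDeviation; // 70.0
        double lower = avg - sigma * stdDeviation; // 30.0
        System.out.println("upper=" + upper + " lower=" + lower);
    }
}
```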
Closes#9356
This change makes InternalHistogram the only InternalAggregation used by the Histogram Aggregator. There is still a separate Bucket implementation and Factory implementation. All buckets are created through the factory passed into the InternalHistogram, and the correct factory implementation is serialised as part of the aggregation to make sure the correct bucket types are always generated.
This is needed by the Transformers (namely the derivative transformer) to allow them to generate buckets of the right type without having to know what the underlying bucket implementation is.
To properly replicate, we currently stop flushing during recovery so we can replay the translog once file copying is done. Once recovery is done, the translog will be flushed by a background thread that, by default, kicks in every 5s. In case of a recovery failure and a quick re-assignment of a new shard copy, we may fail to flush before starting a new recovery, causing it to deal with a potentially even longer translog. This commit makes sure we flush immediately when the ongoing recovery count goes to 0.
I also added a simple recovery benchmark.
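A sketch of the counting idea (not the actual recovery code): the flush is triggered exactly when the last ongoing recovery finishes.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative only: flush as soon as the number of ongoing recoveries drops to zero.
abstract class RecoveryFlushSketch {
    private final AtomicInteger ongoingRecoveries = new AtomicInteger();

    void recoveryStarted() {
        ongoingRecoveries.incrementAndGet();
    }

    void recoveryFinished() {
        if (ongoingRecoveries.decrementAndGet() == 0) {
            flush(); // don't wait for the periodic (default 5s) background flush
        }
    }

    abstract void flush();
}
```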
Closes#9439
If an index is deleted during the initial state of the snapshot operation, the entire snapshot can fail with an NPE. This commit improves handling of this situation and allows the snapshot to continue if partial snapshots are allowed.
Closes#9024
PR #8672 addresses ambiguous polygons - those that either cross the dateline or span the map - by complying with the OGC standard right-hand rule. Since ```GeoPolygonFilter``` is self-contained logic, the fix in #8672 did not address the issue for the ```GeoPolygonFilter```. This was identified in issue #5968
This fixes the ambiguous polygon issue in ```GeoPolygonFilter``` by moving the dateline crossing code from ```ShapeBuilder``` to ```GeoUtils``` and reusing the logic inside the ```pointInPolygon``` method. Unit tests are added to ensure support for coordinates specified in either standard lat/lon or great-circle coordinate systems.
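For illustration, the right-hand rule disambiguation boils down to checking a ring's winding order, which a shoelace-style signed area reveals (a sketch, not the ShapeBuilder/GeoUtils code):

```java
// Illustrative only: positive signed area => counter-clockwise winding,
// negative => clockwise. The declared orientation can then disambiguate
// polygons that cross the dateline versus those that span the map.
public final class WindingOrderSketch {

    static double signedArea(double[] lons, double[] lats) {
        double sum = 0;
        for (int i = 0; i < lons.length; i++) {
            int j = (i + 1) % lons.length;
            sum += lons[i] * lats[j] - lons[j] * lats[i];
        }
        return sum / 2.0;
    }

    public static void main(String[] args) {
        // A small counter-clockwise square: signed area is positive.
        double[] lons = {0, 1, 1, 0};
        double[] lats = {0, 0, 1, 1};
        System.out.println(signedArea(lons, lats)); // 1.0
    }
}
```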
closes#5968
closes#9304
The InternalClusterInfoService reaches out to the nodes to get information about their disk usage and shard store size. Upon a node level error we currently remove the node info from the local cache. We should also clear the cache when we run into an error on the action level (excluding any info from all nodes).
This also adds settings for the timeout used when waiting for nodes.
Closes#9449