By default, the active, rejected and queue thread statistics are included for the index, bulk and search thread pools.
Statistics for the other thread pools can be included via the `h` query string parameter.
Closes #4907
In recent changes, we added missing support for the `source` parameter in some REST APIs:
* #4892 : mget
* #4900 : mpercolate
* #4901 : msearch
* #4902 : mtermvectors
* #4903 : percolate
```java
BytesReference content = null;
if (request.hasContent()) {
    content = request.content();
} else {
    String source = request.param("source");
    if (source != null) {
        content = new BytesArray(source);
    }
}
```
It's definitely better to have:
```java
BytesReference content = request.content();
if (!request.hasContent()) {
    String source = request.param("source");
    if (source != null) {
        content = new BytesArray(source);
    }
}
```
That said, it would be nice to have a single method that handles this for the various REST actions.
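A minimal sketch of what such a shared helper could look like, mirroring the second snippet above (the class name and its location are illustrative, not an existing API):

```java
import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.rest.RestRequest;

// Hypothetical utility class; name and placement are illustrative only.
public final class RestContentHelper {

    private RestContentHelper() {}

    /**
     * Returns the request body if present, otherwise falls back to the
     * `source` query string parameter.
     */
    public static BytesReference restContent(RestRequest request) {
        BytesReference content = request.content();
        if (!request.hasContent()) {
            String source = request.param("source");
            if (source != null) {
                content = new BytesArray(source);
            }
        }
        return content;
    }
}
```

Each REST action could then call this helper instead of duplicating the fallback logic.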
Closes #4924.
We currently use the number of hot threads that we are
interested in as the value for iterating over the actual
hot threads, which can lead to an ArrayIndexOutOfBoundsException
if the actual number of threads is less than the given number.
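A minimal sketch of the kind of guard that prevents the out-of-bounds access (the variable names are illustrative, not the actual HotThreads code):

```java
// Illustrative only: clamp the requested count to what is actually available,
// so the loop can never index past the end of the array.
int limit = Math.min(requestedThreads, hotThreads.length);
for (int i = 0; i < limit; i++) {
    // ... format and append hotThreads[i] to the report ...
}
```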
Closes #4927
- add javadocs
- remove Iterable from all multi-bucket aggregations
- all single-bucket aggregations should have getDocCount() and getAggregations()
- all multi-bucket aggregations should have getBuckets() that returns Collection
- every multi-bucket aggregation should have these methods:
  - getBuckets() : Collection
  - getBucketByKey(String) : Bucket
  - getBucketByKey(Number) : Bucket (only for numeric buckets)
  - getBucketByKey(DateTime) : Bucket (only for date buckets)
  - getBucketByKey(GeoPoint) : Bucket (only for geohash_grid)
- every bucket in all multi-bucket aggregations should have these methods:
  - getKey() : String
  - getKeyAsText() : Text
  - getKeyAsNumber() : Number (if the key can be a numeric value, e.g. range & histograms)
  - getKeyAsGeoPoint() : GeoPoint (in case of the geohash_grid agg)
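As a usage illustration, a hedged sketch of how a consumer might navigate a terms aggregation under the consolidated API above (the aggregation name "tags" and the `searchResponse` variable are assumptions, not part of the change):

```java
import org.elasticsearch.search.aggregations.Aggregations;
import org.elasticsearch.search.aggregations.bucket.terms.Terms;

// Assumes a SearchResponse that contains a terms aggregation named "tags".
Terms tags = searchResponse.getAggregations().get("tags");

// Multi-bucket aggregations expose their buckets as a Collection ...
for (Terms.Bucket bucket : tags.getBuckets()) {
    String key = bucket.getKey();
    long docCount = bucket.getDocCount();
    Aggregations subAggs = bucket.getAggregations();
}

// ... and allow direct lookup of a single bucket by key.
Terms.Bucket elasticsearch = tags.getBucketByKey("elasticsearch");
```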
Closes #4922
This upgrade includes a fix for RAM estimation on IndexReader
that now makes it possible to expose the number of bytes used per segment
as a setting in Elasticsearch (LUCENE-5373).
Additionally, this bugfix release contains a small fix for highlighting
that had already been ported to Elasticsearch when it was reported (LUCENE-5361).
Closes #4897
If a get field mapping request is issued and everything but the field can be
found (i.e. the index and type exist but the field does not), the response
should return an empty JSON object instead of a 404.
Closes #4738
In order to make sure that only the requested data is returned to the client,
a couple of fixes have been applied to the ClusterState.toXContent() method.
Some tests were also added to the YAML test suite.
Closes #4885
If preparsing of the source is needed (due to a mapping configuration that
extracts the routing/id value from the source) and the source is not
valid JSON, then the whole bulk request fails instead of only the single
item concerned.
This commit ensures that a broken JSON request is not forwarded to the
destination shard and instead creates an appropriate BulkItemResponse
that includes a failure.
This also implied changing the BulkItemResponse serialization, because one
can no longer be sure that a response includes an ID, in case it was not
specified and could not be extracted from the JSON.
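As a usage note, a hedged sketch of how a client picks up such per-item failures with the Java API (the `client` and `bulkRequest` setup is assumed; the null-id detail reflects the serialization change described above):

```java
BulkResponse bulkResponse = client.bulk(bulkRequest).actionGet();
if (bulkResponse.hasFailures()) {
    for (BulkItemResponse item : bulkResponse.getItems()) {
        if (item.isFailed()) {
            // getId() may now be null if the id was neither specified
            // nor extractable from the (broken) source document.
            String id = item.getId();
            String reason = item.getFailureMessage();
        }
    }
}
```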
Closes #4745
After copying the index files (which are throttled), we currently throttle the translog as well. The translog phase3 part is performed under a lock, so it's better not to throttle it at all and move it as fast as possible.
The local mode modification done previously was faulty: env['WORKSPACE'] is not
a sufficient discriminator for detecting whether the script is running under
Jenkins. This fails on the Jenkins parent jobs, since those types of jobs
don't have WORKSPACE set.
We currently always run with a SecurityManager installed. To make sure we
also work without one, we should randomly swap it out, i.e. run without the
security manager.
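A minimal sketch of what such randomization could look like in the test bootstrap (purely illustrative; the actual hook and seed handling in the test infrastructure are not shown here):

```java
// Illustrative only: occasionally run without a SecurityManager so that code
// paths not exercised under the manager are also covered. In practice the seed
// would come from the randomized test infrastructure.
long seed = System.nanoTime();
if (new java.util.Random(seed).nextBoolean()) {
    System.setSecurityManager(null); // this round runs without a security manager
}
```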