Currently we close the store, and therefore the underlying directory, when the engine / shard is closed, i.e. during relocation etc. We also just close it while there are still searches going on and/or while we are still recovering from it. The recoveries might fail, which is ok, but searches etc. can fail in confusing ways, e.g. appearing as pending fetch phases.
The contract of the Directory doesn't prevent reading from a stream that was opened before the Directory was closed, but from a system-boundary perspective, and given the lifecycles we test, waiting until all resources are released seems to be the right thing to do. It also helps to make sure everything is closed properly before the directory itself is closed.
Note: this commit adds Object#wait and Object#notify/notifyAll to the forbidden APIs.
Closes #5432
The default mustache engine was using HTML escaping, which breaks queries if the output is used as JSON etc. This commit adds escaping for:
```
\b Backspace (ascii code 08)
\f Form feed (ascii code 0C)
\n New line
\r Carriage return
\t Tab
\v Vertical tab
\" Double quote
\\ Backslash
```
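For illustration, a minimal sketch of such an escaper (the helper name is hypothetical; note that strict JSON has no \v escape, so a pure-JSON target would emit \u000B instead):
```
// Sketch: escape the characters listed above for safe embedding in a
// JSON/JavaScript string.
static String escape(String s) {
    StringBuilder sb = new StringBuilder(s.length());
    for (int i = 0; i < s.length(); i++) {
        char c = s.charAt(i);
        switch (c) {
            case '\b':     sb.append("\\b");  break; // backspace (0x08)
            case '\f':     sb.append("\\f");  break; // form feed (0x0C)
            case '\n':     sb.append("\\n");  break; // new line
            case '\r':     sb.append("\\r");  break; // carriage return
            case '\t':     sb.append("\\t");  break; // tab
            case '\u000B': sb.append("\\v");  break; // vertical tab
            case '"':      sb.append("\\\""); break; // double quote
            case '\\':     sb.append("\\\\"); break; // backslash
            default:       sb.append(c);
        }
    }
    return sb.toString();
}
```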
Closes #5473
Adds a new API endpoint at /_recovery, as well as to the Java API. The recovery API allows one to see the recovery status of all shards in the cluster. It reports the recovery type, the percent complete, and which files have been copied.
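A hedged sketch of the Java API side (the builder and method names follow the usual admin-client pattern; treat the exact signatures as assumptions):
```
import org.elasticsearch.action.admin.indices.recovery.RecoveryResponse;
import org.elasticsearch.client.Client;

// REST equivalent: GET /_recovery (or GET /{index}/_recovery)
void printRecoveryStatus(Client client) {
    RecoveryResponse response = client.admin().indices()
            .prepareRecoveries()   // no arguments: report on all indices
            .get();
    // The response carries, per shard, the recovery type, the percent
    // complete, and the list of files copied so far.
}
```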
Closes #4637
By default the date_histogram/histogram aggregations return all the buckets within the range of the data itself: the documents with the smallest values (on the field the histogram is computed on) determine the min bucket (the bucket with the smallest key), and the documents with the highest values determine the max bucket (the bucket with the highest key). Often, when requesting empty buckets (min_doc_count: 0), this causes confusion, specifically when the data is also filtered.
To understand why, let's look at an example:
Let's say you're filtering your request to get all docs from the last month, and in the date_histogram agg you'd like to slice the data per day. You also specify min_doc_count: 0 so that you still get empty buckets for the days to which no document belongs. By default, if the first document that falls in this last month also happens to fall on the first day of the **second week** of the month, the date_histogram will **not** return empty buckets for all the days prior to that second week. The reason is that by default the histogram aggregations only start building buckets when they encounter documents (hence, missing all the days of the first week in our example).
With extended_bounds, you can now "force" the histogram aggregations to start building buckets at a specific min value and to keep building buckets up to a max value (even if there are no more documents). Using extended_bounds only makes sense when min_doc_count is 0 (the empty buckets will never be returned if min_doc_count is greater than 0).
Note that (as the name suggests) extended_bounds is **not** filtering buckets. Meaning, if the min bound is higher than the values extracted from the documents, the documents will still dictate what the min bucket will be (and the same goes for extended_bounds.max and the max bucket). For filtering buckets, one should nest the histogram agg under a range filter agg with the appropriate min/max.
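A hedged sketch of the example above with the Java API (the builder method names, in particular extendedBounds, are assumptions about the API added here; the field name and dates are made up):
```
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogram;
import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramBuilder;

// One bucket per day over a whole month, including empty days before the
// first and after the last document.
DateHistogramBuilder perDay = AggregationBuilders.dateHistogram("per_day")
        .field("timestamp")                          // hypothetical field name
        .interval(DateHistogram.Interval.DAY)
        .minDocCount(0)                              // ask for empty buckets
        .extendedBounds("2014-03-01", "2014-03-31"); // force the full month
```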
Closes #5224
The test currently uses ensureGreen to guarantee that all shard allocations have happened. This only guarantees it from the cluster perspective, and in some cases the nodes are not fast enough to apply the changes (which is what the test is about).
Default precision was computed based on the number of MULTI_BUCKET parents
instead of the number of PER_BUCKET parents.
The ordinals-based execution mode was almost always used, although ordinals
might have non-negligible memory usage compared to the counters.
Closes #5452
* moved `geo_point` parsing to GeoUtils
* cleaned up `gzipped.json` for bulktest
* merged `GeoPointFieldMapper` and `GeoPoint` parsing methods (see the sketch below)
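A hedged sketch of the point-parsing formats involved (the helper name and signature are illustrative, not the actual GeoUtils API):
```
// Sketch: a geo_point can be given as a "lat,lon" string or as a
// [lon, lat] array (GeoJSON order); geohash decoding is omitted here.
static double[] parseGeoPoint(Object value) {
    if (value instanceof String) {
        String s = (String) value;
        int comma = s.indexOf(',');
        if (comma >= 0) { // "lat,lon"
            return new double[] {
                Double.parseDouble(s.substring(0, comma).trim()),
                Double.parseDouble(s.substring(comma + 1).trim())
            };
        }
        throw new IllegalArgumentException("geohash decoding not sketched");
    } else if (value instanceof double[]) { // [lon, lat]
        double[] lonLat = (double[]) value;
        return new double[] { lonLat[1], lonLat[0] };
    }
    throw new IllegalArgumentException("unsupported geo_point format");
}
```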
Closes #5390
Instead of using byte arrays, pass the BytesReference to the actual translog file and use the new copyTo(channel) method to write. This avoids potentially having to convert the data to a byte array first.
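A hedged sketch of the idea behind copyTo (the interface and wrapper below are illustrative, not the actual BytesReference API):
```
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;

// Sketch: write the referenced bytes straight to the translog's channel
// instead of materializing an intermediate byte[].
interface BytesReferenceSketch {
    void copyTo(WritableByteChannel channel) throws IOException;
}

final class ByteBufferBytesReference implements BytesReferenceSketch {
    private final ByteBuffer buffer;

    ByteBufferBytesReference(ByteBuffer buffer) {
        this.buffer = buffer;
    }

    @Override
    public void copyTo(WritableByteChannel channel) throws IOException {
        ByteBuffer duplicate = buffer.duplicate(); // keep the original position intact
        while (duplicate.hasRemaining()) {
            channel.write(duplicate); // large buffers may need several writes
        }
    }
}
```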
Closes #5463