By default, the date_histogram (and histogram) aggregation returns all the buckets within the range of the data itself, that is, the documents with the smallest values of the field the histogram runs on determine the min bucket (the bucket with the smallest key) and the documents with the highest values determine the max bucket (the bucket with the highest key). This often causes confusion when requesting empty buckets (min_doc_count: 0), specifically when the data is also filtered.
To understand why, let's look at an example:
Let's say you're filtering your request to get all docs from the last month, and in the date_histogram agg you'd like to slice the data per day. You also specify min_doc_count: 0 so that you still get empty buckets for those days to which no document belongs. By default, if the first document that falls in this last month also happens to fall on the first day of the **second week** of the month, the date_histogram will **not** return empty buckets for all those days prior to that second week. The reason is that by default the histogram aggregations only start building buckets when they encounter documents (hence, missing all the days of the first week in our example).
With extended_bounds, you can now "force" the histogram aggregations to start building buckets at a specific min value and also keep on building buckets up to a max value (even if there are no documents anymore). Using extended_bounds only makes sense when min_doc_count is 0 (the empty buckets will never be returned if min_doc_count is greater than 0).
Note that (as the name suggests) extended_bounds does **not** filter buckets. Meaning, if the min bound is higher than the values extracted from the documents, the documents will still dictate what the min bucket will be (and the same goes for extended_bounds.max and the max bucket). For filtering buckets, one should nest the histogram agg under a range filter agg with the appropriate min/max.
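A minimal request sketch of the scenario above (the index layout, field name and date values are illustrative only):

```json
{
  "query": {
    "range": { "timestamp": { "gte": "now-1M" } }
  },
  "aggs": {
    "per_day": {
      "date_histogram": {
        "field": "timestamp",
        "interval": "day",
        "min_doc_count": 0,
        "extended_bounds": {
          "min": "now-1M",
          "max": "now"
        }
      }
    }
  }
}
```

With this, a bucket is returned for every day in the last month, even for the days before the first matching document; without extended_bounds those leading (and trailing) days would simply get no buckets, even with min_doc_count set to 0.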
Closes #5224
The test currently uses ensureGreen to guarantee that all shard allocations have happened. This only guarantees it from the cluster perspective, and in some cases the nodes are not fast enough to apply the changes (which is what the test is about).
Default precision was computed based on the number of MULTI_BUCKET parent aggregations
instead of PER_BUCKET ones.
The ordinals-based execution mode was almost always used although ordinals
might have non-negligible memory usage compared to the counters.
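For context, the precision can also be pinned explicitly with precision_threshold instead of relying on the computed default; a sketch with made-up field names:

```json
{
  "aggs": {
    "categories": {
      "terms": { "field": "category" },
      "aggs": {
        "unique_users": {
          "cardinality": {
            "field": "user_id",
            "precision_threshold": 1000
          }
        }
      }
    }
  }
}
```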
Close #5452
* moved `geo_point` parsing to GeoUtils
* cleaned up `gzipped.json` for bulktest
* merged `GeoPointFieldMapper` and `GeoPoint` parsing methods
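For reference, these are the geo_point representations the merged parsing code has to accept: an object with lat/lon, a "lat,lon" string, a geohash string, and a [lon, lat] array. A sketch of documents using each (the `location` field name is only an example):

```json
{ "location": { "lat": 41.12, "lon": -71.34 } }
{ "location": "41.12,-71.34" }
{ "location": "drm3btev3e86" }
{ "location": [ -71.34, 41.12 ] }
```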
Closes #5390
Instead of using byte arrays, pass the BytesReference down to the actual translog file, and use the new copyTo(channel) method to write. This improves things by not potentially having to convert the data to a byte array first.
closes #5463
Today we use ConcurrentMergeScheduler, and this can be painful since it is concurrent at the shard level, with a maximum of 3 threads doing concurrent merges. If several shards are being indexed, there will be a minor explosion of threads trying to do merges, all being throttled by our merge throttling.
Moving to the serial merge scheduler still maintains concurrency of merges across shards, as we have the merge thread pool that schedules those merges. Merging will just be serial on a specific shard.
Also, with the serial merge scheduler we now have a limit on how many merges it will do in one go, so it will let other shards get their fair chance of merging. We use the pending merges on IW to check whether merges are needed for it.
Note that if a merge is happening, it will not block due to a sync on the maybeMerge call at indexing (flush) time, since we wrap our merge scheduler with the EnabledMergeScheduler, where maybeMerge is not activated during indexing, only through explicit calls to IW#maybeMerge (see Merges).
closes #5447
Two changes:
1. In the StupidBackoffScorer only look for the trigram if there is a bigram.
2. Cache the frequencies in WordScorer so we don't look them up again and
again and again. This is implemented by wrapping the TermsEnum in a
special-purpose wrapper that really only works in the context of the WordScorer.
This provides a pretty substantial speedup when there are many candidates.
Closes #5395
* Index one at a time only rarely if doing more than 300.
* When launching async actions, take some care to make sure you don't already
have more than 150 other async actions in flight.
* When indexing in bulk, split into chunks of 1000 documents.
As seen during CI tests, it can happen that the download service is not available for some reason.
This fix makes each test that requires internet access (annotated with @Network) check beforehand that the download service we are testing is still working.
It won't fail the test but will mark the test as `Ignored` in case of failure.
As documented in systemd's manual pages tmpfiles.d(5) and systemd.unit(5),
a package should install its default configuration in /usr/lib, which can
be overridden by system administrators in /etc.
New locations in the rpm:
/usr/lib/systemd/system/elasticsearch.service
/usr/lib/tmpfiles.d/elasticsearch.conf
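As an illustration of the tmpfiles.d(5) format (the exact entry shipped in the rpm may differ), elasticsearch.conf contains a line along these lines:

```
d /var/run/elasticsearch 0755 elasticsearch elasticsearch -
```

An administrator who wants different settings can place a file with the same name under /etc/tmpfiles.d/ (or /etc/systemd/system/ for the unit file), which takes precedence over the copy in /usr/lib.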