This change makes the response API object for Histogram Aggregations the same for all types of Histogram, and does the same for all types of Ranges.
The change removes getBucketByKey() from all aggregations except filters and terms. It also reduces the methods on the Bucket class to just getKey() and getKeyAsString().
The getKey() method returns Object, and the actual type returned will be appropriate for the type of aggregation being run, e.g. date_histogram will return a DateTime from this method while histogram will return a Number.
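On the REST layer the same pair of representations shows up as `key` and `key_as_string` in each bucket; a minimal sketch against a hypothetical `tweets` index with a `timestamp` date field:
```js
GET /tweets/_search?search_type=count
{
  "aggs" : {
    "per_day" : {
      "date_histogram" : { "field" : "timestamp", "interval" : "day" }
    }
  }
}
// each bucket in the response carries both representations, along these lines:
// { "key_as_string" : "2014-01-01T00:00:00.000Z", "key" : 1388534400000, "doc_count" : 12 }
```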
Related to #9049.
By default, the `default` value for `_timestamp` is `now`, which means the date at which the document was processed by the indexing chain.
You can now reject documents which do not provide a `_timestamp` value by setting `ignore_missing` to `false` (defaults to `true`):
```js
{
  "tweet" : {
    "_timestamp" : {
      "enabled" : true,
      "ignore_missing" : false
    }
  }
}
```
When you upgrade the cluster to 1.5 or master, an index created with 1.4 is automatically migrated to the 1.5 syntax.
Say you have defined this in Elasticsearch 1.4.x:
```js
DELETE test
PUT test
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}
PUT test/type/_mapping
{
  "type" : {
    "_timestamp" : {
      "enabled" : true,
      "default" : null
    }
  }
}
```
After migration, the mapping becomes:
```js
{
  "test": {
    "mappings": {
      "type": {
        "_timestamp": {
          "enabled": true,
          "store": false,
          "ignore_missing": false
        },
        "properties": {}
      }
    }
  }
}
```
Closes #8882.
This adds a new boolean (index.merge.scheduler.auto_throttle) dynamic
setting, default true (matching Lucene), to adaptively set the IO rate
limit for merges over time.
This is more flexible than the previous fixed rate throttling because
it adapts to the incoming merge rate: search-heavy applications that
are not doing much indexing will see merges heavily throttled, while
indexing-heavy cases will lighten the throttle so merges can keep up
with incoming indexing.
The fixed rate throttling is still available as a fallback if things
go horribly wrong.
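Since the setting is dynamic, it can be toggled at runtime; a minimal sketch against a hypothetical index named `my_index`:
```js
PUT /my_index/_settings
{
  "index.merge.scheduler.auto_throttle" : false
}
```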
Closes #9243. Closes #9133.
Today we give the HTTP status back within the HTTP response itself and within the JSON response as well:
```sh
curl localhost:9200/
```
```js
{
  "status" : 200,
  "name" : "Red Wolf",
  "version" : {
    "number" : "2.0.0",
    "build_hash" : "6837a61d8a646a2ac7dc8da1ab3c4ab85d60882d",
    "build_timestamp" : "2014-08-19T13:55:56Z",
    "build_snapshot" : true,
    "lucene_version" : "4.9"
  },
  "tagline" : "You Know, for Search"
}
```
Before Elasticsearch 1.0, the type was allowed to be passed as the root
element when uploading a document. However, this was ambiguous if the
mappings also contained a field with the same name as the type. The
behavior was changed in 1.0 to not allow this, but a setting was added
for backwards compatibility. This change removes the setting for 2.0.
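To illustrate the ambiguity with a hypothetical example: with the type wrapper allowed, the following document could mean either a `tweet` with fields `user` and `message`, or a document with a single object field that happens to be named `tweet`:
```js
PUT /twitter/tweet/1
{
  "tweet" : {
    "user" : "kimchy",
    "message" : "trying out Elasticsearch"
  }
}
```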
The header indicates how many shard copies (the primary and replica shards) the write was supposed to go to, on how many
shard copies the write succeeded, and it potentially captures shard failures if writing to a replica shard fails.
For async writes it also includes the number of shards on which the write is still pending.
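As a sketch of the shape (the values shown are illustrative), an index response now carries a `_shards` section along these lines:
```js
{
  "_index" : "twitter",
  "_type" : "tweet",
  "_id" : "1",
  "_version" : 1,
  "_shards" : {
    "total" : 2,      // primary + replica copies the write should go to
    "successful" : 1, // copies on which the write succeeded
    "failed" : 0      // failures are captured in a "failures" array when present
  },
  "created" : true
}
```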
Closes #7994
This commit adds a verbose flag to the _segments API. Currently the
only additional information returned when set to true is the full
RAM tree from Lucene, reported as an additional element per segment.
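A quick way to try it (the exact shape of the per-segment element may differ):
```js
GET /test/_segments?verbose=true
// with verbose=true each segment additionally reports its full Lucene RAM tree
```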
This is our only cache which is not 'exact' and might allow for stale results.
Additionally, a similar cache that needs to perform lookups in other indices
in order to run queries is the script index; for that index we rely on the
filesystem cache, so we should probably do the same with terms filter
lookups.
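For reference, a terms filter lookup fetches its value list from a document in another index; a sketch with hypothetical `tweets`/`users` indices:
```js
GET /tweets/_search
{
  "query" : {
    "filtered" : {
      "filter" : {
        "terms" : {
          "user" : {
            "index" : "users",
            "type" : "user",
            "id" : "2",
            "path" : "followers"
          }
        }
      }
    }
  }
}
```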
Close #9056
The `cluster.routing.allocation.balance.primary` setting has caused
a lot of confusion in the past, while it has very little benefit from a
shard allocation point of view. Users tend to modify this value to
evenly distribute primaries across the nodes, which is dangerous since
the primary flag on its own can trigger relocations. The primary flag for a shard
should not have any impact on cluster performance unless the high-level feature
suffering from primary hotspots is buggy. Yet, this setting was intended to be a
tie-breaker, which is not necessary anymore since the algorithm is deterministic.
This commit removes this setting entirely.
In some situations the shard balancing weight delta becomes negative. Yet,
a negative delta is always treated as `well balanced`, which is wrong. I wasn't
able to reproduce the issue in any way other than using the real-world data
from issue #9023. This commit adds a fix for absolute deltas as well as a base
test class that allows building tests or simulations from the cat API output.
Closes #9023
This feature adds an optional `orientation` parameter to the GeoJSON document and the geo_shape mapping, enabling users to explicitly define how they want Elasticsearch to interpret vertex ordering. The default uses the right-hand rule (counterclockwise for the outer ring, clockwise for inner rings), complying with the OGC Simple Feature Access standard. The parameter can be specified for an entire index in the geo_shape mapping by adding `"orientation" : "left"|"right"|"cw"|"ccw"|"clockwise"|"counterclockwise"`, and/or overridden on each insert by adding the same parameter to the GeoJSON document.
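A sketch of both usages, with a hypothetical `example` index and illustrative field names:
```js
PUT /example
{
  "mappings" : {
    "doc" : {
      "properties" : {
        "location" : {
          "type" : "geo_shape",
          "orientation" : "ccw"
        }
      }
    }
  }
}
PUT /example/doc/1
{
  "location" : {
    "type" : "polygon",
    "orientation" : "cw",
    "coordinates" : [
      [ [100.0, 0.0], [101.0, 0.0], [101.0, 1.0], [100.0, 1.0], [100.0, 0.0] ]
    ]
  }
}
```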
Closes #8764
Up to now, all filters could be cached using the `_cache` flag that could be
set to `true` or `false` and the default was set depending on the type of the
`filter`. For instance, `script` filters are not cached by default while
`terms` are. For some filters, the default is more complicated, e.g. date
range filters are cached unless they use `now` in a non-rounded fashion.
This commit adds a 3rd option called `auto`, which becomes the default for
all filters. So for all filters a cache wrapper will be returned, and the
decision will be made at caching time, per-segment. Here is the default logic:
- if there is already a cache entry for this filter in the current segment,
then return the cache entry.
- else if the doc id set cannot iterate (e.g. script filter) then do not cache.
- else if the doc id set is already cacheable and it has been used twice or
more in the last 1000 filters then cache it.
- else if the filter is costly (e.g. multi-term) and has been used twice or more
in the last 1000 filters then cache it.
- else if the doc id set is not cacheable and it has been used 5 times or more
in the last 1000 filters, then load it into a cacheable set and cache it.
- else return the uncached set.
So for instance geo-distance filters and script filters are going to use this
new default and are not going to be cached because of their iterators.
Similarly, date range filters are going to use this default all the time, but
it is very unlikely that those that use `now` in a non-rounded fashion will get
reused, so in practice they won't be cached.
`terms`, `range`, ... filters produce cacheable doc id sets with good iterators
so they will be cached as soon as they have been used twice.
Filters that don't produce cacheable doc id sets such as the `term` filter will
need to be used 5 times before being cached. This ensures that we don't spend
CPU iterating over all documents matching such filters unless we have good
evidence of reuse.
One last interesting point about this change is that it also applies to compound
filters. So if you keep on repeating the same `bool` filter with the same
underlying clauses, it will be cached on its own while up to now it used to
never be cached by default.
`_cache: true` has been changed to only cache on large segments, in order to not
pollute the cache since small segments should not be the bottleneck anyway.
However `_cache: false` still has the same semantics.
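For illustration, a filter can now spell out any of the three caching modes; a sketch against a hypothetical `logs` index (setting `auto` explicitly is redundant since it is the new default):
```js
GET /logs/_search
{
  "query" : {
    "filtered" : {
      "filter" : {
        "range" : {
          "timestamp" : { "gte" : "now-1h/h" },
          "_cache" : "auto"
        }
      }
    }
  }
}
```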
Close #8449
Add a new `ignore_idle_threads` boolean option (default `true`) to
/_nodes/hot_threads, to filter out threads in known idle places, like
waiting on a socket select or pulling the next task from an empty
queue.
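To include idle threads again, the option can be disabled per request:
```js
GET /_nodes/hot_threads?ignore_idle_threads=false
```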
Closes #8985. Closes #8908.
The setting `mapping.date.round_ceil` (and the undocumented setting
`index.mapping.date.parse_upper_inclusive`) affect how date ranges using
`lte` are parsed. In #8556 the semantics of date rounding were
solidified, eliminating the need to have different parsing functions
whether the date is inclusive or exclusive.
This change removes these legacy settings and improves the tests
for the date math parser (now at 100% coverage!). It also removes the
unnecessary function `DateMathParser.parseTimeZone` for which
the existing `DateTimeZone.forID` handles all use cases.
Any user previously using these settings can refer to the changed
semantics and change their query accordingly. This is a breaking change
because even dates without datemath previously used the different
parsing functions depending on context.
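As an illustration of the solidified rounding semantics (index and field names are hypothetical): `gte` rounds down while `lte` rounds up to the end of the rounded unit, regardless of the removed settings:
```js
GET /logs/_search
{
  "query" : {
    "range" : {
      "timestamp" : {
        "gte" : "2014-11-01||/M", // rounds down to 2014-11-01T00:00:00.000
        "lte" : "2014-11-01||/M"  // rounds up to 2014-11-30T23:59:59.999
      }
    }
  }
}
```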
Closes #8598. Closes #8889.
I replaced "high frequent terms" with "high frequency terms" and "low frequent terms" with "low frequency terms".
Alternatively, we could write, "highly frequent terms" and "minimally frequent terms" (or just "rare terms").
Closes #8962