Histogram aggregation supports an 'offset' option to move bucket boundaries.
In a histogram with buckets of size X, an offset of Y moves the bucket
boundaries from 0, X, 2X, 3X, ... to Y, X+Y, 2X+Y, 3X+Y, ...
The previous 'pre_offset' and 'post_offset' options are removed in favour of
the simplified 'offset' option.
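For example (field name and values illustrative), the following request would produce buckets starting at 20, 70, 120, ...:
```js
{
  "aggs": {
    "prices": {
      "histogram": {
        "field": "price",
        "interval": 50,
        "offset": 20
      }
    }
  }
}
```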
Closes#9417
Closes#9505
The `analyzer` setting is now the base setting, and `search_analyzer`
is simply an override of the search time analyzer. When setting
`search_analyzer`, `analyzer` must be set.
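A minimal mapping sketch (field and analyzer names are illustrative):
```js
{
  "type": {
    "properties": {
      "title": {
        "type": "string",
        "analyzer": "standard",
        "search_analyzer": "whitespace"
      }
    }
  }
}
```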
closes#9371
Extended_stats now displays the upper and lower bounds on standard deviations (e.g. avg +/- std).
The default shows 2 standard deviations above/below the mean, but this can be
changed using the `sigma` parameter, which accepts any non-negative double.
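For example (field name illustrative), to show bounds at 3 standard deviations:
```js
{
  "aggs": {
    "grade_stats": {
      "extended_stats": {
        "field": "grade",
        "sigma": 3
      }
    }
  }
}
```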
Closes#9356
This change makes the response API object for Histogram Aggregations the same for all types of Histogram, and does the same for all types of Ranges.
The change removes getBucketByKey() from all aggregations except filters and terms. It also reduces the methods on the Bucket class to just getKey() and getKeyAsString().
The getKey() method returns Object and the actual type returned will be appropriate for the type of aggregation being run, e.g. date_histogram will return a DateTime for this method and Histogram will return a Number.
Related to #9049.
The default value for `timestamp` is `now`, which means the date the document was processed by the indexing chain.
You can now reject documents which do not provide a `timestamp` value by setting `ignore_missing` to false (defaults to `true`):
```js
{
  "tweet" : {
    "_timestamp" : {
      "enabled" : true,
      "ignore_missing" : false
    }
  }
}
```
When you upgrade the cluster to 1.5 or master, an index created with 1.4 is automatically migrated to the 1.5 syntax.
Let's say you have defined this in Elasticsearch 1.4.x:
```js
DELETE test
PUT test
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}
PUT test/type/_mapping
{
  "type" : {
    "_timestamp" : {
      "enabled" : true,
      "default" : null
    }
  }
}
```
After migration, the mapping becomes:
```js
{
  "test": {
    "mappings": {
      "type": {
        "_timestamp": {
          "enabled": true,
          "store": false,
          "ignore_missing": false
        },
        "properties": {}
      }
    }
  }
}
```
Closes#8882.
This adds a new boolean (index.merge.scheduler.auto_throttle) dynamic
setting, default true (matching Lucene), to adaptively set the IO rate
limit for merges over time.
This is more flexible than the previous fixed rate throttling because it
responds to the incoming merge rate: search-heavy applications that are not
doing much indexing will see merges heavily throttled, while indexing-heavy
cases will lighten the throttle so merges can keep up with incoming indexing.
The fixed rate throttling is still available as a fallback if things
go horribly wrong.
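Since the setting is dynamic, it can be toggled per index, e.g. (index name illustrative):
```js
PUT /my_index/_settings
{
  "index.merge.scheduler.auto_throttle": false
}
```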
Closes#9243
Closes#9133
Today we give the HTTP status back within the HTTP response itself and within the JSON response as well:
```sh
curl localhost:9200/
```
```js
{
  "status" : 200,
  "name" : "Red Wolf",
  "version" : {
    "number" : "2.0.0",
    "build_hash" : "6837a61d8a646a2ac7dc8da1ab3c4ab85d60882d",
    "build_timestamp" : "2014-08-19T13:55:56Z",
    "build_snapshot" : true,
    "lucene_version" : "4.9"
  },
  "tagline" : "You Know, for Search"
}
```
Before Elasticsearch 1.0, the type was allowed to be passed as the root
element when uploading a document. However, this was ambiguous if the
mappings also contained a field with the same name as the type. The
behavior was changed in 1.0 to not allow this, but a setting was added
for backwards compatibility. This change removes the setting for 2.0.
The header indicates how many shard copies (primary and replica shards) a write was supposed to go to, on how many
shard copies the write succeeded, and it captures shard failures if writing to a replica shard fails.
For async writes it also includes the number of shard copies on which the write is still pending.
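An illustrative shape of the header for an index with one replica (exact fields may differ; a pending count only appears for async writes):
```js
{
  "_shards": {
    "total": 2,
    "successful": 1,
    "failed": 0
  }
}
```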
Closes#7994
This commit adds a verbose flag to the _segments api. Currently the
only additional information returned when set to true is the full
ram tree from lucene, as an additional element per segment.
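For example:
```js
GET /_segments?verbose=true
```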
This is our only cache which is not 'exact' and might allow for stale results.
Additionally, a similar feature that needs to perform lookups in other
indices in order to run queries is the script index, and for that index we rely
on the filesystem cache, so we should probably do the same with terms filter
lookups.
Close#9056
The `cluster.routing.allocation.balance.primary` setting has caused
a lot of confusion in the past while it has very little benefit from a
shard allocation point of view. Users tend to modify this value to
evenly distribute primaries across the nodes, which is dangerous since
a primary flag on its own can trigger relocations. The primary flag for a shard
should not have any impact on cluster performance unless the high level feature
suffering from primary hotspots is buggy. Yet, this setting was intended to be a
tie-breaker, which is not necessary anymore since the algorithm is deterministic.
This commit removes this setting entirely.
In some situations the shard balancing weight delta becomes negative. Yet,
a negative delta is always treated as `well balanced`, which is wrong. I wasn't
able to reproduce the issue in any way other than using the real world data
from issue #9023. This commit adds a fix for absolute deltas as well as a base
test class that allows building tests or simulations from the cat API output.
Closes#9023
This feature adds an optional orientation parameter to the GeoJSON document and geo_shape mapping, enabling users to explicitly define how they want Elasticsearch to interpret vertex ordering. The default uses the right-hand rule (counterclockwise for outer ring, clockwise for inner ring), complying with OGC Simple Feature Access standards. The parameter can be explicitly specified for an entire index using the geo_shape mapping by adding `"orientation" : "left"|"right"|"cw"|"ccw"|"clockwise"|"counterclockwise"`, and/or overridden on each insert by adding the same parameter to the GeoJSON document.
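A sketch of a GeoJSON document whose outer ring is supplied in clockwise order and declares that explicitly (field name and coordinates illustrative):
```js
{
  "location": {
    "type": "polygon",
    "orientation": "clockwise",
    "coordinates": [
      [ [-45.0, -30.0], [-45.0, 30.0], [45.0, 30.0], [45.0, -30.0], [-45.0, -30.0] ]
    ]
  }
}
```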
closes#8764
Up to now, all filters could be cached using the `_cache` flag that could be
set to `true` or `false` and the default was set depending on the type of the
`filter`. For instance, `script` filters are not cached by default while
`terms` are. For some filters, the default is more complicated and eg. date
range filters are cached unless they use `now` in a non-rounded fashion.
This commit adds a 3rd option called `auto`, which becomes the default for
all filters. So for all filters a cache wrapper will be returned, and the
decision will be made at caching time, per-segment. Here is the default logic:
- if there is already a cache entry for this filter in the current segment,
then return the cache entry.
- else if the doc id set cannot iterate (eg. script filter) then do not cache.
- else if the doc id set is already cacheable and it has been used twice or
more in the last 1000 filters then cache it.
- else if the filter is costly (eg. multi-term) and has been used twice or more
in the last 1000 filters then cache it.
- else if the doc id set is not cacheable and it has been used 5 times or more
in the last 1000 filters, then load it into a cacheable set and cache it.
- else return the uncached set.
So for instance geo-distance filters and script filters are going to use this
new default and are not going to be cached because of their iterators.
Similarly, date range filters are going to use this default all the time, but
it is very unlikely that those that use `now` in a non-rounded fashion will get
reused, so in practice they won't be cached.
`terms`, `range`, ... filters produce cacheable doc id sets with good iterators
so they will be cached as soon as they have been used twice.
Filters that don't produce cacheable doc id sets such as the `term` filter will
need to be used 5 times before being cached. This ensures that we don't spend
CPU iterating over all documents matching such filters unless we have good
evidence of reuse.
One last interesting point about this change is that it also applies to compound
filters. So if you keep on repeating the same `bool` filter with the same
underlying clauses, it will be cached on its own while up to now it used to
never be cached by default.
`_cache: true` has been changed to only cache on large segments, in order to not
pollute the cache since small segments should not be the bottleneck anyway.
However `_cache: false` still has the same semantics.
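A sketch of requesting the new `auto` behaviour explicitly (field and values illustrative):
```js
{
  "filtered": {
    "query": { "match_all": {} },
    "filter": {
      "terms": {
        "user": ["alice", "bob"],
        "_cache": "auto"
      }
    }
  }
}
```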
Close#8449
Add a new ignore_idle_threads boolean option (default true) to
/_nodes/hot_threads, to filter out threads in known idle places like
waiting on a socket select or on pulling the next task from an empty
queue.
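For example, to include idle threads again:
```js
GET /_nodes/hot_threads?ignore_idle_threads=false
```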
Closes#8985
Closes#8908
The setting `mapping.date.round_ceil` (and the undocumented setting
`index.mapping.date.parse_upper_inclusive`) affect how date ranges using
`lte` are parsed. In #8556 the semantics of date rounding were
solidified, eliminating the need to have different parsing functions
whether the date is inclusive or exclusive.
This change removes these legacy settings and improves the tests
for the date math parser (now at 100% coverage!). It also removes the
unnecessary function `DateMathParser.parseTimeZone` for which
the existing `DateTimeZone.forID` handles all use cases.
Any user previously using these settings can refer to the changed
semantics and change their query accordingly. This is a breaking change
because even dates without datemath previously used the different
parsing functions depending on context.
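A sketch of the new semantics (field name illustrative): rounding now follows the operator, so `gte` rounds down and `lte` rounds up to include the whole unit:
```js
{
  "range": {
    "timestamp": {
      "gte": "2014-01-01||/M",
      "lte": "2014-12-31||/M"
    }
  }
}
```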
closes#8598
closes#8889
I replaced "high frequent terms" with "high frequency terms" and "low frequent terms" with "low frequency terms".
Alternatively, we could write, "highly frequent terms" and "minimally frequent terms" (or just "rare terms").
Closes#8962
We only have a single gateway since es 1.3. There is no need to keep all
these abstractions and nested packages. We can fold most of it into simpler
structures.
If you use the java-package tool to create Java packages, those
paths should also be added to the Debian init script.
Also updated the docs to note that it is ok to install Java 8.
Closes#7383
This change adds a 'http.publish_port' setting to the HTTP module to configure
the port which HTTP clients should use when communicating with the node. This
is useful when running on a bridged network interface or when running behind
a proxy or firewall.
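A sketch of the setting in `elasticsearch.yml` (port value illustrative):
```yml
http.publish_port: 8080
```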
Closes#8807
Closes#8137
Upgrades Lucene to the latest version, and adds support for the BEST_COMPRESSION
parameter now available (with backwards compatibility, etc.) in Lucene.
This option uses deflate, tuned for highly compressible data.
index.codec::
The default value compresses stored data with LZ4 compression, but
this can be set to best_compression for a higher compression ratio,
at the expense of slower stored fields performance.
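A sketch of enabling it at index creation (index name illustrative):
```js
PUT /my_index
{
  "settings": {
    "index.codec": "best_compression"
  }
}
```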
IMO it's safest to implement this as a named codec here, because ES already
has logic to handle this correctly, and because it's unrealistic to have
a plethora of options to Lucene's default codec... we are practically
limited in Lucene to what we can support with back compat, so I don't
think we should overengineer this and add additional unnecessary plumbing.
See also:
https://issues.apache.org/jira/browse/LUCENE-5914
https://issues.apache.org/jira/browse/LUCENE-6089
https://issues.apache.org/jira/browse/LUCENE-6090
https://issues.apache.org/jira/browse/LUCENE-6100
Closes#8863
1. Enable the repository using "add-apt-repository" to avoid the error "No command 'deb' found".
2. Add "sudo" to the update and install commands.
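The resulting instructions look roughly like this (repository URL as in the 1.4-era docs; adjust to your version):
```sh
sudo add-apt-repository "deb http://packages.elasticsearch.org/elasticsearch/1.4/debian stable main"
sudo apt-get update
sudo apt-get install elasticsearch
```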
Closes#8691
Related to #8667:
Some QueryBuilders have been deprecated in 1.x branches. We removed them in 2.0.
Removed
-------
* `textPhrase(...)`
* `textPhrasePrefix(...)`
* `textPhrasePrefixQuery(...)`
* `filtered(...)`
* `inQuery(...)`
* `commonTerms(...)`
* `queryString(...)`
* `simpleQueryString(...)`
Closes#8721.
I'm not sure the `distance-units` section is totally clear: when using the Geohash Cell Filter and omitting a unit, the default is to interpret the integer as the length of the geohash prefix, not to default it to meters. Maybe I'm being pedantic.
Closes#8744
Inner hits allows embedding the nested inner objects, children documents, or the parent document that contributed to the matching of the returned search hit as inner hits, which would otherwise be hidden.
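A sketch of a nested query requesting inner hits (path and field names illustrative):
```js
{
  "query": {
    "nested": {
      "path": "comments",
      "query": { "match": { "comments.message": "elasticsearch" } },
      "inner_hits": {}
    }
  }
}
```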
Closes#8153
Closes#3022
Closes#3152
Adds an `ignore_like` parameter to the MLT Query, which simply tells the
algorithm to skip all the terms from the given documents. This could be useful
in order to better guide nearest neighbor search by telling the algorithm to
never explore the space spanned by the given `ignore_like` docs. In essence we
are interested in the characteristics of a given item, but not in the ones
provided by `ignore_like`, thereby forcing the algorithm to go deeper in its
selection of terms. Note that this is different from simply performing a must
not boolean query on the unliked items. The syntax is exactly the same as the
`like` parameter.
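A sketch (index, type, and ids illustrative):
```js
{
  "more_like_this": {
    "fields": ["title", "description"],
    "like": [
      { "_index": "items", "_type": "item", "_id": "1" }
    ],
    "ignore_like": [
      { "_index": "items", "_type": "item", "_id": "2" }
    ]
  }
}
```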
Closes#8674
Previously, function_score matched each document regardless of the computed score.
This commit adds a query parameter `min_score` (default: -Float.MAX_VALUE).
Documents that have a score lower than this threshold will not be matched.
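A sketch (field name and threshold illustrative):
```js
{
  "function_score": {
    "query": { "match_all": {} },
    "field_value_factor": { "field": "popularity" },
    "min_score": 1.5
  }
}
```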
closes#6952
Today, Elasticsearch has a separate merge thread pool checking once
per second (by default) if any merges are necessary, but this is no
longer necessary since we can and do now tell Lucene's
ConcurrentMergeScheduler never to "hard pause" threads when merges
fall behind, since we do our own index throttling.
This change goes back to letting Lucene launch merges as needed, and
removes these two expert settings:
index.merge.force_async_merge
index.merge.async_interval
Now merges kick off immediately instead of waiting up to 1 second
before running.
Closes#8643
Always use the LocalGateway* equivalents
We already check in the LocalGateway whether a node is a client node, or
is not master-eligible, and skip writing the state there. This allows us
to remove this code that was previously used only for tribe nodes (which
are not master eligible anyway and wouldn't write state) and in
tests (which can shake more bugs out)
Elasticsearch no longer unlocks the Lucene index on startup (this was
dangerous, and could possibly lead to corruption).
Added the new serbian_normalization TokenFilter from Lucene.
NoLockFactory is no longer supported (index.store.fs.fs_lock = none),
and if you have a typo in your fs_lock you'll now hit a StoreException
instead of silently using NoLockFactory.
Closes#8588