The main `elasticsearch.yml` file mixed configuration, documentation, and advice together.
Since the documentation at <http://www.elastic.co/guide/> has improved considerably,
the content has been trimmed down to the essential settings, to curb the urge
to over-configure.
Related: 8d0f1a7d123f579fc772e82ef6b9aae08f6d13fd
Expose new span queries from https://issues.apache.org/jira/browse/LUCENE-6083
Within returns matches from 'little' that are enclosed inside a match from 'big'.
Containing returns matches from 'big' that enclose matches from 'little'.
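A sketch of what using these queries could look like (the index, field and terms are invented for illustration, and the DSL names `span_within` / `span_containing` with `little` and `big` clauses are assumed to mirror the Lucene queries):
```
GET /books/_search
{
  "query": {
    "span_within": {
      "little": {
        "span_term": { "body": "hobbit" }
      },
      "big": {
        "span_near": {
          "clauses": [
            { "span_term": { "body": "dark" } },
            { "span_term": { "body": "lord" } }
          ],
          "slop": 5,
          "in_order": false
        }
      }
    }
  }
}
```
Swapping `span_within` for `span_containing` (same `little`/`big` clauses) would return the enclosing 'big' matches instead.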
Added infrastructure to allow basic member methods to be called in the expressions
language. The methods must take no arguments. Also added the following member
methods for date fields (it should be easy to add more):
* getYear
* getMonth
* getDayOfMonth
* getHourOfDay
* getMinutes
* getSeconds
Allow fields to be accessed without using the member variable [value].
(Note that both ways can be used to access fields for back-compat.)
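A hedged sketch of how this could look from a search request (the index, field names and the use of `script_fields` are illustrative assumptions):
```
GET /tweets/_search
{
  "script_fields": {
    "tweet_year": {
      "lang": "expression",
      "script": "doc['tweet_date'].getYear()"
    },
    "doubled_likes": {
      "lang": "expression",
      "script": "doc['likes'] * 2"
    }
  }
}
```
The second script accesses the field without the [value] member variable, as described above.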
closes#10890
Requiring files to be present on each node is an anti-pattern given the
API-based design of ES. This change removes the ability
to specify the default mapping with a file on each node.
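The API-based alternative stays available; for example, a `_default_` mapping can still be supplied at index creation time (the index name and the `_all` setting below are just for illustration):
```
PUT /my_index
{
  "mappings": {
    "_default_": {
      "_all": { "enabled": false }
    }
  }
}
```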
closes#10620
In order to safely complete recoveries / relocations we have to keep all operations performed since the recovery started available for replay. At the moment we do so by preventing the engine from flushing, thus making sure that the operations are kept in the translog. A side effect of this is that the translog keeps growing until the recovery is done. This is not a problem in itself, as we do need these operations, but if another recovery starts concurrently it may have a needlessly long translog to replay. Also, if we shut down the engine at this point for some reason (like when a node is restarted) we have to recover a long translog when we come back.
To avoid this, the translog is changed to be based on multiple files instead of a single one. This allows recoveries to keep hold of the files they need while allowing the engine to flush and do a Lucene commit (which creates a new translog file under the hood).
Change highlights:
- Refactor Translog file management to allow for multiple files.
- Translog maintains a list of referenced files, both by outstanding recoveries and files containing operations not yet committed to Lucene.
- A new Translog.View concept is introduced, allowing recoveries to get a reference to all currently uncommitted translog files plus all future translog files created until the view is closed. They can use this view to iterate over operations.
- Recovery phase3 is removed. That phase replayed operations while preventing new writes to the engine. This is unneeded, as standard indexing also sends all operations from the start of the recovery to the recovering shard. Replaying all ops in the view acquired at the start of the recovery is enough to guarantee no operation is lost.
- IndexShard now creates the translog together with the engine. The translog is closed by the engine on close. ShadowIndexShards do not open the translog.
- Moved the ownership of translog fsyncing to the translog itself, changing the responsible setting to `index.translog.sync_interval` (was `index.gateway.local.sync`); a usage sketch follows below.
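As a rough sketch only (the index name and interval value are made-up examples, and I'm assuming the setting is simply provided in the index settings), the renamed setting would be used like:
```
PUT /my_index
{
  "settings": {
    "index.translog.sync_interval": "10s"
  }
}
```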
Closes#10624
Only parent filters should use the bitset filter cache, to avoid wasting memory.
Also, in the case of object fields, inline the field name into the nested object
instead of creating an additional (dummy) nested identity.
Closes#10662
Closes#10629
The assumption is that gaps in histograms are generally undesirable, for instance
if you want to build a visualization from them. Additionally, we are building new
aggregations that require that there are no gaps in order to work correctly (e.g.
derivatives).
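For illustration (index and field names are made up, and the explicit `min_doc_count` is only shown to make the behaviour obvious), a histogram that returns empty buckets instead of leaving gaps:
```
GET /sales/_search
{
  "size": 0,
  "aggs": {
    "prices": {
      "histogram": {
        "field": "price",
        "interval": 10,
        "min_doc_count": 0
      }
    }
  }
}
```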
When doc values were turned on by default, most meta fields
had them explicitly disabled. However, _field_names was missed.
This change always forces doc values off for _field_names
and removes the unnecessary support when creating index fields.
closes#10892
Adds support for calculating and sending diffs of the most frequently changing elements of the cluster state - the cluster state itself, the meta data and the routing table - instead of sending the full cluster state.
Closes#6295
If you define exactly the same date range query using either the `DATE+0200` notation or plain `DATE` with `time_zone: +0200` set, Elasticsearch gives back different results:
```
DELETE /foo

PUT /foo
{
  "mappings": {
    "tweets": {
      "properties": {
        "tweet_date": {
          "type": "date"
        }
      }
    }
  }
}

POST /foo/tweets/1/
{
  "tweet_date": "2015-04-05T23:00:00+0000"
}

POST /foo/tweets/2/
{
  "tweet_date": "2015-04-06T00:00:00+0000"
}

GET /foo/tweets/_search?pretty
{
  "query": {
    "query_string": {
      "query": "tweet_date:[2015-04-06T00:00:00+0200 TO 2015-04-06T23:00:00+0200]"
    }
  }
}

GET /foo/tweets/_search?pretty
{
  "query": {
    "query_string": {
      "query": "tweet_date:[2015-04-06T00:00:00 TO 2015-04-06T23:00:00]",
      "time_zone": "+0200"
    }
  }
}
```
This PR fixes it and will also allow us to add the same feature to simple_query_string in another PR.
Closes#10477.
(cherry picked from commit 880f4a0)
`IndiceStore#indexCleanup` uses a disruption scheme to delay cluster state
processing. The delay is [1..2] seconds, yet the tests set the shard
deletion timeout to 1 second to speed them up. This can cause random,
non-reproducible failures in this test since the timeouts and delays basically
overlap. This commit adds a longer timeout for this test to prevent these
problems.
Adds a new type of aggregation called 'reducers' which act on the output of aggregations and compute extra information that they add to the aggregation tree. Reducers look much like any other aggregation in the request but have a `buckets_path` parameter which references the aggregation(s) to use.
Internally there are two types of reducer: the first is given the output of its parent aggregation and computes new aggregations to add to the buckets of its parent, and the second (a specialisation of the first) is given a sibling aggregation and outputs an aggregation to be a sibling at the same level as that aggregation.
This PR includes the framework for the reducers, the derivative reducer (#9293), the moving average reducer (#10002) and the maximum bucket reducer (#10000). Not all of these reducer implementations are fully complete yet.
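As a rough illustration only (field and aggregation names are invented, and the syntax may still change while the implementations are completed), a derivative reducer referencing a sibling metric via `buckets_path` could look like:
```
GET /sales/_search
{
  "size": 0,
  "aggs": {
    "sales_per_month": {
      "date_histogram": {
        "field": "date",
        "interval": "month"
      },
      "aggs": {
        "total_sales": {
          "sum": { "field": "price" }
        },
        "sales_deriv": {
          "derivative": { "buckets_path": "total_sales" }
        }
      }
    }
  }
}
```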
Known work left to do (these points will be done once this PR is merged into the master branch):
* Add x-axis normalisation to the derivative reducer
* Add lots more JUnit tests for all reducers
Contributes to #9876
Closes#10002
Closes#9293
Closes#10000