Adds support for calculating and sending diffs instead of the full cluster state for its most frequently changing elements: cluster state, metadata and routing table.
Closes #6295
If you define exactly the same date range query using either the `DATE+0200` notation or plain `DATE` with `time_zone: +0200` set, Elasticsearch gives back different results:
```
DELETE /foo
PUT /foo
{
  "mappings": {
    "tweets": {
      "properties": {
        "tweet_date": {
          "type": "date"
        }
      }
    }
  }
}
POST /foo/tweets/1/
{
  "tweet_date": "2015-04-05T23:00:00+0000"
}
POST /foo/tweets/2/
{
  "tweet_date": "2015-04-06T00:00:00+0000"
}
GET /foo/tweets/_search?pretty
{
  "query": {
    "query_string": {
      "query": "tweet_date:[2015-04-06T00:00:00+0200 TO 2015-04-06T23:00:00+0200]"
    }
  }
}
GET /foo/tweets/_search?pretty
{
  "query": {
    "query_string": {
      "query": "tweet_date:[2015-04-06T00:00:00 TO 2015-04-06T23:00:00]",
      "time_zone": "+0200"
    }
  }
}
```
This PR fixes it and will also allow us to add the same feature to simple_query_string in a follow-up PR.
Closes #10477.
(cherry picked from commit 880f4a0)
`IndicesStore#indexCleanup` uses a disruption scheme to delay cluster state
processing. Yet, the delay is [1..2] seconds, while the tests set the shard
deletion timeout to 1 second to speed up tests. This can cause random,
non-reproducible failures in this test since the timeouts and delays basically
overlap. This commit adds a longer timeout for this test to prevent these
problems.
Adds a new type of aggregation called 'reducers', which act on the output of aggregations and compute extra information that they add to the aggregation tree. Reducers look much like any other aggregation in the request but have a buckets_path parameter which references the aggregation(s) to use.
Internally there are two types of reducer: the first is given the output of its parent aggregation and computes new aggregations to add to the buckets of its parent, and the second (a specialisation of the first) is given a sibling aggregation and outputs an aggregation to be a sibling at the same level as that aggregation.
This PR includes the framework for the reducers, the derivative reducer (#9293), the moving average reducer (#10002) and the maximum bucket reducer (#10000). These reducer implementations are not all fully complete yet.
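As an illustration of the request shape (the index and field names here are made up), a derivative reducer nested inside a date histogram might look like this, with `buckets_path` pointing at the sibling `sales` aggregation:
```
GET /sales_index/_search
{
  "size": 0,
  "aggs": {
    "sales_per_month": {
      "date_histogram": { "field": "date", "interval": "month" },
      "aggs": {
        "sales": { "sum": { "field": "price" } },
        "sales_deriv": { "derivative": { "buckets_path": "sales" } }
      }
    }
  }
}
```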
Known work left to do (these points will be done once this PR is merged into the master branch):
- Add x-axis normalisation to the derivative reducer
- Add lots more JUnit tests for all reducers
Contributes to #9876
Closes #10002
Closes #9293
Closes #10000
Extend SearchParseException and QueryParsingException to report position information in the query JSON where errors were found. All query DSL parser classes that throw these exception types now pass the underlying position information (line and column number) at the point where the error was found.
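For illustration (hypothetical index and field names), in a request like the following, where the mistake is buried inside the JSON, the error now carries the line and column of the offending token instead of just a generic parse failure:
```
GET /foo/_search
{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": { "termz": { "user": "kimchy" } }
    }
  }
}
```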
Closes #3303
The source parameter is implicitly supported and doesn't need to be declared in the REST spec. It is tested though, as every API that supports GET with a body can also accept requests using POST with a body, or GET with the source query_string parameter.
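For example (hypothetical index name), the following two requests are equivalent; in practice the source value would need to be URL-encoded:
```
GET /foo/_count
{
  "query": { "match_all": {} }
}
GET /foo/_count?source={"query":{"match_all":{}}}
```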
When a node was a data-only node, the index state was not written.
If such a node connected to a master that did not have the index
in its cluster state, for example because the master was restarted and
the data folder was lost, then the indices were not imported as dangling
but were deleted instead.
This commit makes sure that index state is also written for data nodes
if they have at least one shard of the index allocated.
closes #8823
closes #9952
This commit removes the obsolete settings for distributors and updates
the documentation on multiple data.path. It also adds an explanation to the
migration guide.
Relates to #9498
Closes #10770
This commit removes unused throws statements where RuntimeExceptions are
mentioned in the throws statement. It also removes obsolete import statements
for java.lang.IllegalArgumentException and java.lang.IllegalStateException.
Regardless of the outcome of #8142, we should at least enforce that
when _source is enabled, it is sufficient to reindex. This change
removes the excludes and includes settings, since these modify
the source, causing us to lose the ability to reindex some fields.
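For reference (hypothetical index and field names), this is the kind of mapping the change disallows; with it, anything under `meta.other` would be stripped from the stored `_source` and could never be reindexed faithfully:
```
PUT /foo
{
  "mappings": {
    "tweets": {
      "_source": {
        "includes": ["meta.*"],
        "excludes": ["meta.other.*"]
      }
    }
  }
}
```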
closes #10814
The code to parse a document was spread across 3 different classes,
and depended on traversing the ObjectMapper hierarchy. This change
consolidates all the doc parsing code into a new DocumentParser.
This should allow adding unit tests (future issue) for document
parsing so the logic can be simplified. All code was copied
directly for this change with only minor modifications to make
it work within the new location.
closes #10802
Multi fields currently parse any field type passed in. However, they
were only intended to support copying simple values from the outer
field. This change adds validation to ensure object and nested
fields are not used within multi fields.
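As an illustration (hypothetical index and field names), a mapping like the following, which puts an object type inside a multi field, is now rejected when the mapping is created:
```
PUT /my_index
{
  "mappings": {
    "my_type": {
      "properties": {
        "title": {
          "type": "string",
          "fields": {
            "extra": {
              "type": "object",
              "properties": {
                "note": { "type": "string" }
              }
            }
          }
        }
      }
    }
  }
}
```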
closes #10745
Now that delete by query is out, we don't need this infrastructure code. Delete by query will be implemented as a plugin, with scan/scroll + bulk delete, so it will not need this infra anyhow.
The current implementation is dangerous: it unexpectedly refreshes,
which can quickly cause an unhealthy index (segment explosion). It
can also delete different documents on primary vs replicas, causing
inconsistent replicas.
For 2.0 we will replace this with an optional plugin that does a
scan/scroll search and then issues bulk delete requests.
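A rough sketch of the flow such a plugin would follow (index, type, query and ids are made up): a scan/scroll search first collects the matching hits, and each page of hits is then removed with the bulk API, advancing the scroll via the returned scroll id until no hits remain:
```
GET /my_index/_search?search_type=scan&scroll=1m
{
  "query": { "term": { "user": "kimchy" } },
  "size": 100
}
POST /_bulk
{ "delete": { "_index": "my_index", "_type": "my_type", "_id": "1" } }
{ "delete": { "_index": "my_index", "_type": "my_type", "_id": "2" } }
```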
Closes #10859
During pinging we open light, temporary connections to the unicast hosts. After the pinging is done we close those. At the moment we do so before returning the results of the pings to the caller. On the other hand, in our transport logic we acquire a lock specific to the node id while opening a connection. When disconnecting from a node, we have to acquire the same lock in order to guarantee that the connection opening has finished. This can cause big delays in environments where opening a connection is very slow, as closing the connections has to wait on slow connection attempts even though the pinging itself is done. This is problematic as it causes master election to use stale data.
Closes #10849