When applied to `date` fields, the `range` query and filter now accept a `time_zone` parameter. It defines the time zone in which the given lower and upper bounds should be interpreted; the bounds are then converted to their UTC-based equivalents:
[source,js]
--------------------------------------------------
{
    "constant_score": {
        "filter": {
            "range" : {
                "born" : {
                    "gte": "2012-01-01",
                    "lte": "now",
                    "time_zone": "+1:00"
                }
            }
        }
    }
}

{
    "range" : {
        "born" : {
            "gte": "2012-01-01",
            "lte": "now",
            "time_zone": "+1:00"
        }
    }
}
--------------------------------------------------
In the above examples, the `gte` bound will actually be converted to the UTC date `2011-12-31T23:00:00`.
NOTE: if you give a date with an explicitly defined time zone and also use the `time_zone` parameter, `time_zone` will be ignored. For example, setting `gte` to `2012-01-01T00:00:00+01:00` with `"time_zone":"+10:00"` will still use the `+01:00` time zone.
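The explicit-offset case from the note, as a sketch (same hypothetical `born` field as above):

[source,js]
--------------------------------------------------
{
    "range" : {
        "born" : {
            "gte": "2012-01-01T00:00:00+01:00",
            "lte": "now",
            "time_zone": "+10:00"
        }
    }
}
--------------------------------------------------

Here the `+01:00` offset embedded in `gte` wins over `time_zone`, so the lower bound is still converted to `2011-12-31T23:00:00` UTC.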
Closes #3729.
Our transport layer relies on action names that identify what needs to be done with each message received and sent on any node, together with the content of the request itself. The action names could use better categorization and more consistent naming, though; this commit introduces the following categories:
- indices: for all the APIs that execute against indices
  - admin: for the APIs that perform administration tasks against indices
  - data: for the APIs that are about data
    - read: APIs that read data
    - write: APIs that write data
    - benchmark: APIs that run benchmarks
- cluster: for all the cluster APIs
  - admin: for the cluster APIs that perform administration tasks
  - monitor: for the cluster APIs that monitor the system
- internal: for all the internal actions that are used from node to node but not directly exposed to users
The change is applied in a backwards compatible manner: we keep a mapping between old and new action names around. When receiving a message, depending on the version of the sending node, we either use the received action name as-is or translate it (old to new if the sender's version is < 1.4). When sending a message, depending on the version of the node we talk to, we either use the updated action name or translate it (new to old if the recipient's version is < 1.4).
For the cases where we don't know the version of the node we talk to, namely unicast ping, transport client nodes info, and transport client sniff mode (which calls cluster state), we just use a lower bound for the version, and thus always use the old action name, which can be understood by both old and new nodes.
Added a test that enforces the known categories for transport action names, and a test that verifies that every action name has a pre-1.4 equivalent for backwards compatibility.
Added backwards compatibility tests for unicast discovery and for the transport client in sniff mode; the test for the ordinary transport client (which calls nodes info) is implicit, as it is used all the time in our backwards compatibility tests.
Also added a backwards compatibility test that sends an empty message to each of the registered transport handlers exposed by older nodes and verifies that what comes back is not an ActionNotFoundTransportException, which would mean that there is a problem in the action name mappings.
Added a TestCluster#getClusterName abstract method and made it possible to retrieve the externalTransportAddress and the internalCluster from CompositeTestCluster.
Closes #7105
If simultaneous create and delete operations arrive against the same id, it's possible that the primary and a replica see those operations in different orders, which may result in the replica throwing a DocumentAlreadyExistsException when the primary didn't. This would leave the replica inconsistent (missing a document that the primary had indexed).
This push fixes the issue by never throwing DocumentAlreadyExistsException from the replica on create.
Closes #7146, #7142
Made sure that the routing required check is performed against the concrete index, and added use of aliases to existing routing tests.
Also took the chance to unify the failure message to this form: `routing is required for [<index>]/[<type>]/[<id>]`.
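As an illustration (index, alias, type, and routing values here are hypothetical), with `_routing.required` enabled in the mapping, an index request through an alias now reports the concrete index in the failure, and succeeds once routing is supplied:

```
# fails with: routing is required for [index1]/[type1]/[1]
curl -XPUT 'localhost:9200/alias1/type1/1' -d '{"foo": "bar"}'

# succeeds once a routing value is provided
curl -XPUT 'localhost:9200/alias1/type1/1?routing=user1' -d '{"foo": "bar"}'
```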
Closes #7145
Before, the index reader used by the percolator didn't allow registering a CoreCloseListener, but now it does, making it safe to cache index field data cache entries.
Creating field data structures is relatively expensive, and caching them can save a lot of work when many queries are evaluated in a percolator call.
Closes #6806, closes #7081
Fields of type `token_count`, `murmur3`, `_all` and `_field_names` are generated only when indexing.
If a GET request accesses the transaction log (because no refresh happened between indexing and the GET request) then these fields cannot be retrieved at all.
Before, the behavior was as follows:
`_all`, `_field_names`: the field was silently ignored.
`murmur3`, `token_count`: a `NumberFormatException` was thrown, because GET tried to parse the values from the source.
In addition, if these fields were not stored, the same behavior occurred if the fields were retrieved with GET after a `refresh()`, because here too the source was used to get the fields.
Now GET accepts a parameter `ignore_errors_on_generated_fields` which has the following effect:
- throw an exception with a meaningful error message explaining the problem if set to `false` (default)
- ignore the field if set to `true`
- always ignore the field if it was not stored
This changes the behavior for `_all` and `_field_names`, as an exception is now thrown if a user tries to GET them before a `refresh()`.
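As a sketch (the index, type, and `token_count` sub-field below are hypothetical), a GET issued before a refresh can be told to skip generated fields instead of failing:

```
# index a document; no refresh happens before the GET
curl -XPUT 'localhost:9200/test/test/1' -d '{"title": "hello world"}'

# without the flag this request throws a meaningful exception; with it,
# the generated token_count sub-field is simply left out of the response
curl -XGET 'localhost:9200/test/test/1?fields=title.word_count&ignore_errors_on_generated_fields=true'
```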
closes #6676, closes #6973
Allow setting the value `default` for `network.tcp.no_delay` and `network.tcp.keep_alive` so that they won't be set at all, since on Solaris setting tcpNoDelay can actually cause a failure.
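A minimal `elasticsearch.yml` sketch, assuming the literal value `default` described above:

```
# leave both socket options untouched instead of setting them explicitly
network.tcp.no_delay: default
network.tcp.keep_alive: default
```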
relates to #7115
CliTool is a base class for command-line interface tools (such as the plugin manager and potentially others). It supports the following:
- single- or multi-command tools
- help printing infrastructure (based on help files)
- a consistent mechanism for parsing arguments (based on the commons-cli lib)
- separation of argument parsing from command execution (for easier unit testing)
- terminal abstraction (uses System.console() when available)
A multi-bucket aggregation where multiple filters can be defined (each filter defines a bucket). The buckets collect all the documents that match their associated filter.
This aggregation can be very useful when one wants to compare analytics between different criteria. It can also be accomplished using multiple definitions of the single filter aggregation, but here the user needs to define the sub-aggregations only once, as shown in the sketch below.
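A minimal sketch (field names are hypothetical), using named filters so that each filter gets its own bucket, with a single shared sub-aggregation:

```
{
    "aggs" : {
        "messages" : {
            "filters" : {
                "filters" : {
                    "errors" :   { "term" : { "body" : "error" } },
                    "warnings" : { "term" : { "body" : "warning" } }
                }
            },
            "aggs" : {
                "monthly" : {
                    "date_histogram" : { "field" : "timestamp", "interval" : "month" }
                }
            }
        }
    }
}
```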
Closes #6118
Now that we have explicit support for aliases when creating indices and as part of index templates, we may remove support for aliases (names only) as part of index settings. This is partially breaking, as the following calls:
curl -XPUT localhost:9200/index -d '{
    "settings" : {
        "aliases" : [ "alias1" ]
    }
}'
and
curl -XPUT localhost:9200/index -d '{
    "settings" : {
        "index.aliases" : [ "alias1" ]
    }
}'
were previously supported and will need to be replaced with:
curl -XPUT localhost:9200/index -d '{
    "aliases" : {
        "alias1" : {}
    }
}'
Closes #5545
The histogram reduce method can run into an infinite loop if the Rounding.nextRoundingValue implementation is buggy, which happened to be the case for DayTimeZoneRoundingFloor.
DayTimeZoneRoundingFloor is fixed, and the histogram reduce method has been changed to fail instead of running into an infinite loop in case of a buggy nextRoundingValue implementation.
Close #6965
Today, `copy_to` always copies a field to the current document, which is often
wrong in the case of nested documents. For example, if you have a nested field
called `n` which has a sub-field `n.source` whose content should be copied to
`target`, then the latter field should be created in the root document instead
of the nested one, since it doesn't have `n.` as a prefix. Conversely, if
you configure the destination field to be `n.target`, then it should go to the
nested document.
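A mapping sketch of the two cases (field names are hypothetical): `n.source` copies to the root-level `target` because the destination doesn't have the `n.` prefix, while a destination of `n.target` stays within the nested document:

```
{
    "mappings" : {
        "doc" : {
            "properties" : {
                "target" : { "type" : "string" },
                "n" : {
                    "type" : "nested",
                    "properties" : {
                        "source" : { "type" : "string", "copy_to" : "target" },
                        "target" : { "type" : "string" }
                    }
                }
            }
        }
    }
}
```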
Close #6701
Implements a new Exists API allowing users to do a fast exists check on any matched documents for a given query.
This API should be faster than using the Count API, as it will:
- terminate the search execution early, as soon as any document is found to exist
- return the response as soon as the first shard reports matched documents
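For illustration (hypothetical index, type, and query; this assumes the `_search/exists` endpoint added by this change):

```
curl -XGET 'localhost:9200/twitter/tweet/_search/exists' -d '{
    "query" : {
        "term" : { "user" : "kimchy" }
    }
}'
```

which returns `{"exists" : true}` as soon as a matching document is found.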
closes #6995
The index process fails when `_timestamp` is enabled and the `path` option is set.
It fails with a `TimestampParsingException[failed to parse timestamp [null]]` message.
Reproduction:
```
DELETE test
PUT test
{
"mappings": {
"test": {
"_timestamp" : {
"enabled" : "yes",
"path" : "post_date"
}
}
}
}
PUT test/test/1
{
"foo": "bar"
}
```
You can define a default value for when the timestamp is not provided
within the index request or in the `_source` document.
By default, the default value is `now`, which means the date the document was processed by the indexing chain.
You can disable that default value by setting `default` to `null`, which makes `timestamp` mandatory:
```
{
"tweet" : {
"_timestamp" : {
"enabled" : true,
"default" : null
}
}
}
```
If you don't provide any timestamp value, indexing will fail.
You can also set the default value to any date respecting the timestamp `format`:
```
{
"tweet" : {
"_timestamp" : {
"enabled" : true,
"format" : "YYYY-MM-dd",
"default" : "1970-01-01"
}
}
}
```
If you don't provide any timestamp value, it will default to `1970-01-01`.
Closes #4718.
Closes #7036.
When there is a cluster block (like no master discovered yet), the bulk action doesn't properly catch the exception of the inner execute call to notify the listener, causing the bulk operation to hang.
closes #7086
This commit adds a profile that skips all validation, i.e.:
- nocommit / tabs checking
- forbidden API checks
- license headers
It's not active by default but can easily be activated with
`mvn -Pdev` or in `~/.m2/settings.xml`.
For reference, see:
http://maven.apache.org/guides/introduction/introduction-to-profiles.html
The `-p` option that was used could result in accidentally deleting more directories
than /var/lib/elasticsearch, so this option was removed.
Note: this only happens if the directories are empty, but the option still isn't needed.
Relates #5770
The upgrade tests required specifying both the lower and the upper version
to run against. This commit adds support for random picks
if the lower version, the upper version, or both are not specified.