The scripted metric aggregation is now a PER_BUCKET aggregation so that parent buckets are evaluated independently. The params and reduceParams are also copied for each instance of the aggregator (i.e. for each parent bucket), so modifications to the values stay within the scope of their parent bucket.
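For illustration, a scripted metric nested under a `terms` aggregation is now evaluated once per parent bucket, each with its own copy of `params`/`reduceParams`. A minimal sketch, with an invented index, field and Groovy-style script bodies:
```
GET /sales/_search
{
  "aggs": {
    "by_type": {
      "terms": { "field": "type" },
      "aggs": {
        "profit": {
          "scripted_metric": {
            "params": { "_agg": {} },
            "init_script": "_agg.transactions = []",
            "map_script": "_agg.transactions.add(doc['amount'].value)",
            "combine_script": "sum = 0; for (t in _agg.transactions) { sum += t }; return sum",
            "reduce_script": "total = 0; for (s in _aggs) { total += s }; return total"
          }
        }
      }
    }
  }
}
```
Any modification a script makes to `params` inside one `by_type` bucket is no longer visible in the others.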
Closes #8036
* `get_upgrade` => `GET _upgrade` -- Return the status
* `upgrade` => `POST _upgrade` -- Perform the operation
Original specification part of c021f22523.
Related: #7884, #7922
short[] values were mistakenly encoded as float[]. This is not an issue for the
text-based xcontents that we have (yaml, json), since floats can represent any
short value and are serialized as strings. However, this change makes the binary
xcontents serialize shorts as ints instead of floats.
Close #7845
By letting the fetch phase understand the nested docs structure, we can serve nested docs as hits.
Because of this commit, the `top_hits` aggregation can be placed in a `nested` or `reverse_nested` aggregation.
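A sketch of the combination this enables (index, nested path and size are invented):
```
GET /articles/_search
{
  "aggs": {
    "comments": {
      "nested": { "path": "comments" },
      "aggs": {
        "top_comments": {
          "top_hits": { "size": 3 }
        }
      }
    }
  }
}
```
The returned hits are the nested `comments` docs themselves rather than their root documents.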
Closes #7164
In some cases a shard search request gets created on a node to be only used there and never sent over the transport. This commit clarifies that and creates a new base class called `ShardSearchLocalRequest` that can and will be only used locally. `ShardSearchTransportRequest` on the other hand delegates to the local version but extends `TransportRequest` and is `Streamable`, which means that it is supposed to be sent over the transport.
This way `OriginalIndices` is required (and now mandatory) only in the transport variant.
Took the chance to remove an unused InternalScrollSearchRequest constructor and an empty else branch in `TransportSearchScrollQueryAndFetchAction`.
Closes #7855
This patch adds per-index memory usage information to `_cat/indices`: memory used by FieldData, IdCache, Percolate, and Segments (memory, index writer, version map).
```
% curl 'localhost:9200/_cat/indices?v&h=i,tm'
i tm
wiki 8.1gb
test 30.5kb
user 1.9mb
```
Closes #7008
* Run flush in beforeIndexShardClosed to prevent an empty shard.
* Only run check index if the shard state before closing was: started, relocated or post_recovery
This commit does the following:
* Add the new API at the REST layer, backed by the optimize API with the
upgrade flag, and by the segments API to find the upgrade status (see the sketch after this list).
* Add `upgrade` flag to optimize API, and deprecate `force` flag (will
remove in master)
* Add test for both synchronous and async upgrade
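A hedged sketch of how the new endpoints might be invoked (host and `pretty` flag are incidental):
```
% curl -XGET 'localhost:9200/_upgrade?pretty'   # report which segments still need upgrading
% curl -XPOST 'localhost:9200/_upgrade'         # upgrade segments written by older versions
```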
Closes #7884, closes #7922
When removing and then installing a plugin again, all configuration files in the `config/pluginname` dir were removed.
This is bad, as users may have set up and added specific configuration files.
Now, during an install, if we detect already existing files in the `config/pluginname` directory, we simply copy the new file to the same dir but append `.new` to its name.
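As a sketch of the resulting behaviour (plugin name and file names are invented; command syntax as used by the 1.x plugin manager):
```
% bin/plugin --remove myplugin
% bin/plugin --install myorg/myplugin/1.0
# config/myplugin/myplugin.yml already exists, so the bundled copy is written
# as config/myplugin/myplugin.yml.new and the user's file is left untouched.
```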
Related to #5064.
(cherry picked from commit 5da028f)
(cherry picked from commit 4cb1f95)
The currently used method `testRunStarted` is only called before any tests have been run; we need to reset that state before each test, which is why we need to use `testStarted` instead.
The logging configuration was expected in the path.home folder, which is set to
target/JX
when running the bwc tests from the console.
Therefore the logger could not be initialized, failing with this error:
```
[INFO] Failed to configure logging...
org.elasticsearch.ElasticsearchException: Failed to load logging configuration
    at org.elasticsearch.common.logging.log4j.LogConfigurator.resolveConfig(LogConfigurator.java:117)
    at org.elasticsearch.common.logging.log4j.LogConfigurator.configure(LogConfigurator.java:81)
    at org.elasticsearch.bootstrap.Bootstrap.setupLogging(Bootstrap.java:96)
    at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:180)
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:32)
Caused by: java.nio.file.NoSuchFileException: /home/britta/es/target/J0/config
    at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
    at sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
    at sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
    at sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:97)
    at java.nio.file.Files.readAttributes(Files.java:1686)
    at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:109)
    at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:69)
    at java.nio.file.Files.walkFileTree(Files.java:2602)
    at org.elasticsearch.common.logging.log4j.LogConfigurator.resolveConfig(LogConfigurator.java:107)
    ... 4 more
log4j:WARN No appenders could be found for logger (node).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
```
Setting the config directory fixes this.
Logs from external nodes are still not printed properly. They are inserted into the log
whenever stdout is printed ([WARNING] JVM J0: stdout was not empty...).
Closes #7964
Before this change the write consistency check was performed both on the node that receives the write request and on the node that holds the primary shard. This change removes the check on the node that receives the request, since it is redundant.
It also moves the write consistency check on the node that holds the primary shard to a later moment, after forking the thread that performs the actual write on the primary shard.
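For context, the check in question is the one driven by the `consistency` parameter on write requests, e.g. (index and document are invented):
```
% curl -XPUT 'localhost:9200/test/doc/1?consistency=quorum' -d '{ "field": "value" }'
```
With this change, the quorum check for such a request runs only on the node holding the primary shard.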
Closes #7873
When the date format is defined in the mapping, you cannot use another format when querying with a range date query or filter.
For example, this won't work:
```
DELETE /test

PUT /test/t/1
{
  "date": "2014-01-01"
}

GET /test/_search
{
  "query": {
    "filtered": {
      "filter": {
        "range": {
          "date": {
            "from": "01/01/2014"
          }
        }
      }
    }
  }
}
```
It causes:
```
Caused by: org.elasticsearch.ElasticsearchParseException: failed to parse date field [01/01/2014], tried both date format [dateOptionalTime], and timestamp number
```
It would be nice if we could support another date format at query time, just like we support `analyzer` at search time on string fields.
Something like:
```
GET /test/_search
{
  "query": {
    "filtered": {
      "filter": {
        "range": {
          "date": {
            "from": "01/01/2014",
            "format": "dd/MM/yyyy"
          }
        }
      }
    }
  }
}
```
Same for queries:
```
GET /test/_search
{
  "query": {
    "range": {
      "date": {
        "from": "01/01/2014",
        "format": "dd/MM/yyyy"
      }
    }
  }
}
```
Closes #7189.
This change removes the `script_type` parameter from the scripted metric aggregation and adds support for `_file` and `_id` suffixes to the `init_script`, `map_script`, `combine_script` and `reduce_script` parameters, making the definition of the script source consistent with the other APIs that use the ScriptService.
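Assuming the suffixed parameter names follow directly from the description above (`init_script_file`, `map_script_id`, ...), a sketch with invented script names:
```
GET /sales/_search
{
  "aggs": {
    "profit": {
      "scripted_metric": {
        "init_script_file": "my_init",
        "map_script_file": "my_map",
        "combine_script_file": "my_combine",
        "reduce_script_id": "my_indexed_reduce"
      }
    }
  }
}
```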
This adds a `per_field_analyzer` parameter to the Term Vectors API, which
allows overriding the default analyzer on a per-field basis. If the field already
stores term vectors, they will be re-generated. Since the MLT Query uses
the Term Vectors API under the hood, this commit also adds the same ability
to the MLT Query, thereby allowing users to fine-tune how each field item
should be processed and analyzed.
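A sketch of the new parameter (index, type, document and analyzer choice are invented; the 1.x endpoint is assumed to be `_termvector`):
```
GET /test/doc/1/_termvector
{
  "fields": ["text"],
  "per_field_analyzer": {
    "text": "whitespace"
  }
}
```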
Closes #7801
The randomisation that deletes documents is also removed from the tests, as this doc-accounting change would mean that the specific scores expected in the tests would be subject to random variability and therefore fail.
Closes #7951
When asking for `GET /_cat/indices?v`, you can now retrieve closed indices in addition to open ones.
```
health status index pri rep docs.count docs.deleted store.size pri.store.size
yellow open .marvel-2014.05.21 1 1 8792 0 21.7mb 21.7mb
close test
yellow open .marvel-2014.05.22 1 1 3871 0 10.7mb 10.7mb
red open .marvel-2014.05.27 1 1
```
Closes #7907.
Closes #7936.
`took` is computed based on the system clock and can be negative if the clock
time was updated during the execution of the search request. This commit
protects against these cases by replacing `took` with 1 if the elapsed time is
negative.
Close #7968
By default term vectors are now realtime, as opposed to previously near
realtime. If they are not found in the index, they will be generated on the
fly. The document is fetched from the transaction log and treated as an
artificial document. One can set the `realtime` parameter to `false` in order to
disable this functionality. This consequently makes the MLT query realtime in
fetching documents, as it previously was before switching from
the multi get API to the multi term vectors API.
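Opting out would look like this (same invented index/type/document as in the earlier term vectors sketch):
```
GET /test/doc/1/_termvector?realtime=false
```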
Closes #7846
Currently DEBUG logs can get very verbose because IndicesClusterStateService logs the complete mapping with every mapping update. We should suppress it in DEBUG mode if it is long, and always log the full mapping in TRACE.
Closes #7949
When sending a multicast ping, there is no way to determine how long it will take before all nodes respond. Currently we send two pings (one at the start, one after half the timeout) and wait until the ping timeout has passed for all responses to come back. However, if all nodes are fast to respond, there is a relatively large gap between the moment the pings were gathered and the election that is based on them. This commit adds a last ping round (at the timeout) where we know how many nodes we expect answers from. Once all nodes have responded, we complete the pinging.
Closes #7924
Due to the component start order we may process an incoming ping while the ZenDiscovery module is not yet started. This leads to an exception (from which we recover correctly, but the logs are not nice). UnicastZenPing should only process pings once it is started; previously we processed them as long as it was not closed or stopped.
Closes #7950
This commit makes the lookup structures that are used for mappings immutable.
When changes are required, a new instance is created while the current instance
is left unmodified. This is done efficiently thanks to a hash table
implementation based on an array hash trie, see
org.elasticsearch.common.collect.CopyOnWriteHashMap.
ManyMappingsBenchmark returns indexing times that are similar to the ones that
can be observed in current master.
Ultimately, I would like to see if we can make mappings completely immutable as
well and update them atomically. This is not trivial, however, e.g. because of dynamic
mappings. So here is a first baby step that should help move in that
direction.
Close #7486