Tests fail once in a while because of a ClassCastException at the MVEL
level. We suspect that this happens because a script that is JIT-ed on
a specific data type cannot later be used with another one, but we
didn't manage to reproduce it in our development environments, so let's
change the field names to see if this error keeps occurring.
We rely on the `cluster.name` setting being the same across all nodes,
transport clients, etc. If a node setting contains `cluster.name`,
the test might not work if a random transport client is swapped
in. Passing such a configuration should result in an exception since
it's clearly an illegal state.
If one asks for `http://es:9200/_plugin/PLUGIN_NAME/` and the plugin's `_site` directory contains an `index.html` file, it is correctly served.
This is not the case for sub-directories: a `_site/folder/index.html` file is not served when requesting `http://es:9200/_plugin/PLUGIN_NAME/folder/`; instead one gets a 403 Forbidden response, as if trying to browse the folder.
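A minimal sketch of the expected resolution, using a hypothetical `SiteFileResolver` helper rather than the actual plugin-serving code:
```java
import java.nio.file.Files;
import java.nio.file.Path;

class SiteFileResolver {
    // Expected behavior: a request that maps to a directory should fall
    // back to that directory's index.html (at any depth), not a 403.
    static Path resolve(Path siteRoot, String requestPath) {
        Path file = siteRoot.resolve(requestPath).normalize();
        if (Files.isDirectory(file)) {
            file = file.resolve("index.html");
        }
        return file;
    }
}
```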
Closes#4845.
Detects if rescores arrive as an array instead of a plain object. If
so, each element of the array is parsed as a separate rescore to be
executed one after another. It looks like this:
"rescore" : [ {
"window_size" : 100,
"query" : {
"rescore_query" : {
"match" : {
"field1" : {
"query" : "the quick brown",
"type" : "phrase",
"slop" : 2
}
}
},
"query_weight" : 0.7,
"rescore_query_weight" : 1.2
}
}, {
"window_size" : 10,
"query" : {
"score_mode": "multiply",
"rescore_query" : {
"function_score" : {
"script_score": {
"script": "log10(doc['numeric'].value + 2)"
}
}
}
}
} ]
Rescores as a single object are still supported.
Closes#4748
It happens to be the case that the iteration order of a HashMap's
keyset might be different across runs. This can cause non-deterministic
results in shard balancing if weights are identical and multiple shards
of the same index are eligible for relocation. This commit adds
a tie-breaker based on the shard ID to prioritise the lowest shard
ID. This also makes `AddIncrementallyTests#testAddNodesAndIndices`
reproducible.
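A minimal sketch of the tie-breaking idea (the types and names here are hypothetical, not the actual balancer code):
```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ShardTieBreaker {
    // Hypothetical candidate type; the real balancer works on richer types.
    record ShardCandidate(String index, int shardId, float weight) {}

    public static void main(String[] args) {
        List<ShardCandidate> candidates = new ArrayList<>(List.of(
                new ShardCandidate("idx", 2, 1.0f),
                new ShardCandidate("idx", 0, 1.0f),
                new ShardCandidate("idx", 1, 1.0f)));

        // Order by weight first; when weights are identical, fall back to
        // the shard ID so that HashMap iteration order cannot influence
        // which shard is picked for relocation.
        candidates.sort(Comparator
                .comparingDouble(ShardCandidate::weight)
                .thenComparingInt(ShardCandidate::shardId));

        System.out.println(candidates); // shard 0 is always first
    }
}
```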
Closes#4867
We need to remove the reproduce info printer after the suite
returns, otherwise it might print a bogus line if a subsequent
non-REST test fails. The `RunNotifier` is used across suites in
the same JVM and the listener sticks to it.
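A minimal sketch of the pattern using JUnit 4's `RunNotifier` API (the listener body is illustrative):
```java
import org.junit.runner.notification.RunListener;
import org.junit.runner.notification.RunNotifier;

public class ListenerLifecycle {
    // Because the RunNotifier instance is shared across suites in the
    // same JVM, a listener added for one suite must be detached once
    // that suite finishes, or it leaks into later suites.
    public static void runSuite(RunNotifier notifier, Runnable suite) {
        RunListener printer = new RunListener() { /* reproduce-info printing */ };
        notifier.addListener(printer);
        try {
            suite.run(); // run the suite's tests
        } finally {
            notifier.removeListener(printer);
        }
    }
}
```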
This removes the overhead of polling a Bytes/Double/Long-Values instance in
every call to collect.
Additionally, the AggregationsCollector has been changed to wrap a simple array
instead of an ArrayList.
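A hypothetical sketch of the idea, with the values-source abstractions heavily simplified:
```java
// Simplified stand-ins for the real Bytes/Double/Long values sources.
interface LongValues { long get(int docId); }
interface ValuesSource { LongValues longValues(); }

class SumAggregator {
    private final ValuesSource source;
    private LongValues values; // resolved once per segment, then reused
    private long sum;

    SumAggregator(ValuesSource source) { this.source = source; }

    // Called when moving to the next segment, not once per document.
    void setNextReader() { values = source.longValues(); }

    // Hot path: no per-document polling of the values source.
    void collect(int docId) { sum += values.get(docId); }

    long sum() { return sum; }
}
```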
Close#4841
Terms aggregations return up to `size` terms, so up to now, the way to get all
matching terms back was to set `size` to an arbitrarily high number that would
be larger than the number of unique terms.
Terms aggregators already made sure not to allocate memory based on the `size`
parameter, so this commit mostly consists of making `0` an alias for the
maximum integer value in the TermsParser.
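A minimal sketch of the parser-side alias (hypothetical names, not the actual TermsParser code):
```java
class TermsSize {
    // `size: 0` becomes an alias for "all matching terms". This is safe
    // because terms aggregators never pre-allocate memory proportional
    // to the requested size.
    static int resolveRequiredSize(int requestedSize) {
        return requestedSize == 0 ? Integer.MAX_VALUE : requestedSize;
    }
}
```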
Close#4837
The way `HistogramAggregator` works is that for every value, it computes a
rounded value that basically looks like
`(value / interval) * interval` and uses it as a key in a hash table to
aggregate counts.
However, the exact rounded value is not yet needed at that stage; all we need
is a value that uniquely identifies the bucket, such as `(value / interval)`.
We can multiply by `interval` again only when building the bucket: this way
the second step is performed once per bucket instead of once per value.
Although this looks like a micro-optimization for the case that was just
described, it makes more sense with our date rounding implementations, which
are more CPU-intensive.
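A minimal sketch of the two-step rounding (hypothetical names; the real implementations also cover the more expensive date rounding):
```java
class HistogramKeys {
    // Hot path, executed once per value: compute only the bucket key.
    // floorDiv keeps negative values in the correct bucket.
    static long bucketKey(long value, long interval) {
        return Math.floorDiv(value, interval);
    }

    // Cold path, executed once per bucket: recover the rounded bucket
    // start by multiplying the key with the interval again.
    static long bucketStart(long key, long interval) {
        return key * interval;
    }
}
```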
Close#4800
Adds a new FetchSubPhase, FieldDataFieldsFetchSubPhase, which loads the
field data cache for a field and returns an array of values for the
field.
Also removes the `doc['<field>']` and `_source.<field>` workarounds, which
are no longer needed in field name resolution.
Closes#4492
Since elasticsearch 0.90.6, rivers don't start anymore when template files are defined in the `config/templates` directory.
Steps to reproduce:
Create `config/templates/default.json`:
```javascript
{
  "default" : {
    "template" : "*",
    "mappings" : {
      "_default_" : {
      }
    }
  }
}
```
Start a dummy river:
```sh
curl -XPUT 'localhost:9200/_river/my_river/_meta' -d '{ "type" : "dummy" }'
```
It gives:
```
[2014-01-01 22:08:38,151][INFO ][cluster.metadata ] [Forge] [_river] creating index, cause [auto(index api)], shards [1]/[1], mappings [_default_]
[2014-01-01 22:08:38,239][INFO ][river.routing ] [Forge] no river _meta document found, retrying in 1000 ms
[2014-01-01 22:08:38,245][INFO ][cluster.metadata ] [Forge] [_river] update_mapping [my_river] (dynamic)
[2014-01-01 22:08:38,250][INFO ][river.routing ] [Forge] no river _meta document found, retrying in 1000 ms
[2014-01-01 22:08:39,244][INFO ][river.routing ] [Forge] no river _meta document found, retrying in 1000 ms
[2014-01-01 22:08:39,252][INFO ][river.routing ] [Forge] no river _meta document found, retrying in 1000 ms
[2014-01-01 22:08:40,246][INFO ][river.routing ] [Forge] no river _meta document found, retrying in 1000 ms
[2014-01-01 22:08:40,254][INFO ][river.routing ] [Forge] no river _meta document found, retrying in 1000 ms
[2014-01-01 22:08:41,246][INFO ][river.routing ] [Forge] no river _meta document found, retrying in 1000 ms
[2014-01-01 22:08:41,255][INFO ][river.routing ] [Forge] no river _meta document found, retrying in 1000 ms
[2014-01-01 22:08:42,249][WARN ][river.routing ] [Forge] no river _meta document found after 5 attempts
[2014-01-01 22:08:42,257][WARN ][river.routing ] [Forge] no river _meta document found after 5 attempts
```
With elasticsearch 0.90.2, or with no template file in the `config/templates` directory, it gives:
```
[2014-01-01 22:22:32,096][INFO ][cluster.metadata ] [Forge] [_river] creating index, cause [auto(index api)], shards [1]/[1], mappings []
[2014-01-01 22:22:32,221][INFO ][cluster.metadata ] [Forge] [_river] update_mapping [my_river] (dynamic)
[2014-01-01 22:22:32,228][INFO ][river.dummy ] [Forge] [dummy][my_river] create
[2014-01-01 22:22:32,228][INFO ][river.dummy ] [Forge] [dummy][my_river] start
[2014-01-01 22:22:32,234][INFO ][cluster.metadata ] [Forge] [_river] update_mapping [my_river] (dynamic)
```
Closes#4577.
Closes#4656.
StringFieldMapper.toXContent uses the defaults for analyzed fields in order to
know which options to add to the builder. This means that if the field is not
analyzed and has norms enabled, it will fail to emit `norms.enabled: true`.
Parsing the mapping again will then result in a StringFieldMapper that has
norms disabled.
The same fix applies to index options.
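A minimal sketch of the failure mode and the fix, using hypothetical booleans rather than the mapper's actual serialization code:
```java
class NormsSerialization {
    // Buggy: compares against the default of *analyzed* fields (norms
    // enabled), so a not-analyzed field with norms enabled looks like
    // the default and `norms.enabled: true` is never emitted.
    static boolean shouldEmitNormsBuggy(boolean normsEnabled) {
        boolean analyzedDefault = true;
        return normsEnabled != analyzedDefault;
    }

    // Fixed: compare against the default that applies to this field's
    // own configuration (not-analyzed string fields disable norms).
    static boolean shouldEmitNorms(boolean normsEnabled, boolean analyzed) {
        boolean defaultForThisField = analyzed;
        return normsEnabled != defaultForThisField;
    }
}
```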
Close#4760