We currently always run with the SecurityManager installed. To make sure we
also work without it, we should randomly swap it out, i.e. run without the
security manager.
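A minimal sketch of what the randomization could look like in the test setup; the hook and helper here are hypothetical:

import java.util.Random;

public class SecurityManagerRandomization {
    // Hypothetical setup hook: roughly half of the runs drop the
    // SecurityManager so both configurations get covered.
    static void randomizeSecurityManager(Random random) {
        if (random.nextBoolean()) {
            System.setSecurityManager(null); // run without a security manager
        }
    }
}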
Tests fail once in a while because of a ClassCastException at the MVEL level.
We suspect that this happens because a script that is JIT-ed on a specific
data type cannot later be used with another one, but we didn't manage to
reproduce this in our development environments, so let's try to change the
field names to see if this error keeps occurring.
We rely on the `cluster.name` setting being the same across all nodes,
transport clients, etc. If a node's settings contain `cluster.name`,
the test might not work if a random transport client is swapped
in. Passing such a configuration should result in an exception since
it's clearly an illegal state.
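A minimal sketch of the intended check, assuming a hypothetical validation hook in the test cluster setup:

import java.util.Map;

public class NodeSettingsValidation {
    // Hypothetical check: node-level settings must never carry cluster.name,
    // since it has to match across all nodes and transport clients.
    static void validate(Map<String, String> nodeSettings) {
        if (nodeSettings.containsKey("cluster.name")) {
            throw new IllegalStateException(
                    "cluster.name must not be set in node settings");
        }
    }
}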
If one asks for `http://es:9200/_plugin/PLUGIN_NAME/` and the plugin's _site directory contains an index.html file, it will be correctly served.
This is not the case for subdirectories: a _site/folder/index.html is not served when requesting `http://es:9200/_plugin/PLUGIN_NAME/folder/`; instead one gets a 403 Forbidden response, as if trying to browse the folder.
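The fix boils down to falling back to index.html when a directory is requested; a simplified sketch (real code also needs to keep the resolved path inside _site):

import java.io.File;

public class SitePathResolution {
    // Map a request path below /_plugin/PLUGIN_NAME/ onto the plugin's _site
    // directory, serving folder/index.html for directory requests instead of
    // answering 403 Forbidden.
    static File resolve(File siteDirectory, String requestPath) {
        File file = new File(siteDirectory, requestPath);
        if (file.isDirectory()) {
            file = new File(file, "index.html");
        }
        return file;
    }
}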
Closes #4845
Detects whether rescores arrive as an array instead of a plain object. If so,
each element of the array is parsed as a separate rescore to be executed
one after another. It looks like this:
"rescore" : [ {
"window_size" : 100,
"query" : {
"rescore_query" : {
"match" : {
"field1" : {
"query" : "the quick brown",
"type" : "phrase",
"slop" : 2
}
}
},
"query_weight" : 0.7,
"rescore_query_weight" : 1.2
}
}, {
"window_size" : 10,
"query" : {
"score_mode": "multiply",
"rescore_query" : {
"function_score" : {
"script_score": {
"script": "log10(doc['numeric'].value + 2)"
}
}
}
}
} ]
Rescores as a single object are still supported.
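On the parsing side the change boils down to branching on the token type; a rough sketch against the XContentParser API, where `parseSingleRescore` is a placeholder for the existing single-object parsing:

import org.elasticsearch.common.xcontent.XContentParser;

public class RescoreParsingSketch {
    public void parseRescore(XContentParser parser) throws Exception {
        if (parser.currentToken() == XContentParser.Token.START_ARRAY) {
            // New: an array of rescores, parsed and executed one after another.
            while (parser.nextToken() != XContentParser.Token.END_ARRAY) {
                parseSingleRescore(parser);
            }
        } else {
            // Still supported: a single rescore given as a plain object.
            parseSingleRescore(parser);
        }
    }

    private void parseSingleRescore(XContentParser parser) throws Exception {
        // Placeholder for the pre-existing single-rescore parsing logic.
    }
}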
Closes #4748
It happens to be the case that the iteration order of a HashMap's
keyset might be different across runs. This can cause nondeterministic
results in shard balancing if weights are identical and multiple shards
of the same index are eligible for relocation. This commit adds
a tie-breaker based on the shard ID to prioritise the lowest shard
ID. This also makes `AddIncrementallyTests#testAddNodesAndIndices`
reproducible.
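A hedged sketch of the tie-breaking idea; the shard representation and weight are simplified stand-ins for the balancer's internals:

import java.util.Comparator;

public class ShardTieBreaker implements Comparator<ShardTieBreaker.Shard> {
    // Simplified stand-in for a shard routing entry.
    public static class Shard {
        final int shardId;
        final float weight;
        Shard(int shardId, float weight) {
            this.shardId = shardId;
            this.weight = weight;
        }
    }

    @Override
    public int compare(Shard a, Shard b) {
        int byWeight = Float.compare(a.weight, b.weight);
        if (byWeight != 0) {
            return byWeight;
        }
        // Tie-breaker: prefer the lowest shard ID so identical weights no
        // longer depend on HashMap keyset iteration order.
        return Integer.compare(a.shardId, b.shardId);
    }
}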
Closes #4867
We need to remove the reproduce info printer after the suite
returns, otherwise it might print a bogus line if a subsequent
non-REST test fails. The `RunNotifier` is used across suites in
the same JVM and the listener sticks to it.
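Roughly, assuming the printer is a JUnit RunListener (names here are illustrative):

import org.junit.runner.notification.RunListener;
import org.junit.runner.notification.RunNotifier;

public class ReproduceListenerCleanup {
    // The RunNotifier is shared by every suite in the JVM, so the listener
    // must be detached once the suite returns or it keeps printing.
    static void runSuiteWithPrinter(RunNotifier notifier, RunListener reproduceInfoPrinter, Runnable suite) {
        notifier.addListener(reproduceInfoPrinter);
        try {
            suite.run();
        } finally {
            notifier.removeListener(reproduceInfoPrinter);
        }
    }
}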
We currently pull in the lucene-expressions module that is referenced
by lucene-suggest. Yet we don't make use of this dependency at all,
and it pulls in a bunch of unshaded libs like `antlr` and `asm` which
are pretty common in other projects. We should exclude this
dependency since we don't use it at all and it causes problems
when Elasticsearch is used as a node client (see #4858).
If we mark the dependency as provided, it won't be included in the
distribution.
Closes #4859
Closes #4858
Removed unused misc.asciidoc file
Added plugins directory to directory layout
Fixed transport.tcp.connect_timeout value to match the code found in NetworkService.TcpSettings
Clarified that phrase query does not preserve order of terms
Clarified merge page
Added instructions on how to build documentation to docs/README
This removes the overhead of polling a Bytes/Double/Long-Values instance in
every call to collect.
Additionally, the AggregationsCollector has been changed to wrap a simple array
instead of an ArrayList.
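The pattern, with illustrative names (the real values-source and collector classes differ), is to grab the per-segment values instance once instead of polling for it on every document:

public class HoistedValuesSketch {
    // Illustrative stand-ins for the field data abstractions.
    interface DoubleValues { double valueAt(int doc); }
    interface ValuesSource { DoubleValues doubleValues(); }

    private final ValuesSource source;
    private DoubleValues values; // cached once per segment

    HoistedValuesSketch(ValuesSource source) {
        this.source = source;
    }

    void setNextReader() {
        // Poll the values source once per segment...
        values = source.doubleValues();
    }

    void collect(int doc) {
        // ...instead of once per collected document.
        double v = values.valueAt(doc);
        // aggregate v ...
    }
}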
Close #4841
Terms aggregations return up to `size` terms, so up to now, the way to get all
matching terms back was to set `size` to an arbitrarily high number that would
be larger than the number of unique terms.
Terms aggregators already made sure not to allocate memory based on the `size`
parameter, so this commit mostly consists of making `0` an alias for the
maximum integer value in the TermsParser.
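In `TermsParser` terms, the whole trick is essentially a one-liner (sketch; the surrounding parsing code is omitted):

static int resolveRequiredSize(int parsedSize) {
    // 0 is an alias for "all terms". This is safe because terms aggregators
    // never pre-allocate memory proportional to the size parameter.
    return parsedSize == 0 ? Integer.MAX_VALUE : parsedSize;
}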
Close #4837
The way `HistogramAggregator` works is that for every value, it computes a
rounded value that basically looks like
`(value / interval) * interval` and uses it as a key in a hash table to
aggregate counts.
However, the exact rounded value is not needed yet at that stage; all we need
is a value that uniquely identifies the bucket, such as `(value / interval)`.
We can multiply by `interval` again only when building the bucket: this way
the second step is performed once per bucket instead of once per value.
Although this looks like a micro-optimization for the case that was just
described, it makes more sense with the date rounding implementations that we
have, which are more CPU-intensive.
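Schematically, with illustrative names (the real code goes through the rounding abstractions rather than plain division):

import java.util.HashMap;
import java.util.Map;

public class HistogramKeySketch {
    // Per value: compute only the bucket key; no multiplication yet.
    static Map<Long, Long> countPerBucket(long[] values, long interval) {
        Map<Long, Long> counts = new HashMap<>();
        for (long value : values) {
            long key = value / interval; // uniquely identifies the bucket
            counts.merge(key, 1L, Long::sum);
        }
        return counts;
    }

    // Per bucket, when building the response: recover the rounded value once.
    static long roundedValue(long key, long interval) {
        return key * interval;
    }
}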
Close #4800
Adds a new FetchSubPhase, FieldDataFieldsFetchSubPhase, which loads the
field data cache for a field and returns an array of values for the
field.
Also removes `doc['<field>']` and `_source.<field>` workaround no longer
needed in field name resolving.
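For illustration, requesting such a field through the Java API might look as follows; note that `addFieldDataField` as the request-builder entry point and the `rating` field are assumptions on my part:

import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.index.query.QueryBuilders;

public class FieldDataFieldsExample {
    public static SearchResponse search(Client client) {
        return client.prepareSearch("index")
                .setQuery(QueryBuilders.matchAllQuery())
                // Assumed entry point: loads `rating` via field data and
                // returns an array of values on each hit.
                .addFieldDataField("rating")
                .execute().actionGet();
    }
}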
Closes #4492