In our REST tests we already have support for features and skip sections that allow tests to be skipped if a feature is not supported.
We can then add a skip section based on the benchmark feature to the benchmark tests and execute them only when they are supported, knowing that they need at least one node with the node.bench setting within the cluster. We can check that this requirement is met by calling the nodes info API.
This way we can dynamically decide whether to execute those tests, and we don't need to have a node.bench node around all the time. In fact, given that the REST tests use the GLOBAL cluster, we want to be able to randomize settings as much as possible and run tests against default settings as well. Also, this mechanism can easily be supported by the external cluster implementation that is used during the release process.
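A minimal sketch of what that nodes info check might look like with the Java client (the helper name and surrounding test plumbing are hypothetical):

```java
import org.elasticsearch.action.admin.cluster.node.info.NodeInfo;
import org.elasticsearch.action.admin.cluster.node.info.NodesInfoResponse;
import org.elasticsearch.client.Client;

// Hedged sketch: returns true if at least one node in the cluster was
// started with node.bench=true, so benchmark tests are allowed to run.
static boolean benchmarkNodeAvailable(Client client) {
    NodesInfoResponse info = client.admin().cluster().prepareNodesInfo()
            .setSettings(true).get();
    for (NodeInfo node : info.getNodes()) {
        if (node.getSettings().getAsBoolean("node.bench", false)) {
            return true;
        }
    }
    return false;
}
```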
Introduced the ability to disable benchmark nodes, which is needed by BenchmarkNegativeTest.
The test failed because 'percent_terms_to_match' defaults to 0.3, which results in requiring that some terms found only in the queried document must match when all the documents are on the same shard.
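For illustration, a hedged sketch of how a test could neutralize that default via the 1.x Java query builder:

```java
import org.elasticsearch.index.query.MoreLikeThisQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

// Hedged sketch: drop the 0.3 default so that terms found only in the
// queried document cannot make the query unmatchable.
static MoreLikeThisQueryBuilder mltWithoutThreshold() {
    return QueryBuilders.moreLikeThisQuery("text")
            .likeText("some sample text")
            .minTermFreq(1)
            .minDocFreq(1)
            .percentTermsToMatch(0f);
}
```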
By default the More Like This API excludes the queried document from the response. However, when debugging or when comparing scores across different queries, it can be useful to have the best possible matching hit returned. This option lets users explicitly specify the desired behavior.
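A hedged sketch of opting in (assumption: the new option is exposed on the builder as include(boolean)):

```java
import org.elasticsearch.index.query.MoreLikeThisQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

// Hedged sketch: ask MLT to also return the queried document itself,
// which should then come back as the best possible hit.
static MoreLikeThisQueryBuilder mltIncludingSelf() {
    return QueryBuilders.moreLikeThisQuery("title")
            .likeText("the quick brown fox")
            .include(true); // assumption: new flag introduced by this change
}
```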
Closes #6067
In the Google Groups forum there appears to be some confusion as to what mlt does. This documentation update should hopefully help demystify this feature and provide some understanding of how to use its parameters.
Closes #6092
* The shardTopDocs array should be created with a size equal to the total number of shard-level requests, not the total number of requests that have a shard-level result.
* Make sure no null TopDocs entries are passed down to TopDocs#merge (see the sketch after this list).
* Added dedicated scroll tests that test scrolling on an index that has missing shards due to node failure.
* Made sure that the sort fields in SimpleNestedTests exist by adding the fields to the mapping during index creation.
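A hedged sketch of the null-entry guard against the Lucene 4.x TopDocs.merge(Sort, int, TopDocs[]) signature (method name illustrative):

```java
import java.io.IOException;

import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;

// Hedged sketch: the array is sized by the number of shard-level requests;
// shards that never responded leave null slots, which are replaced with
// empty results so no null entry ever reaches TopDocs.merge.
static TopDocs mergeShardResults(TopDocs[] shardTopDocs, int size) throws IOException {
    for (int i = 0; i < shardTopDocs.length; i++) {
        if (shardTopDocs[i] == null) { // shard never produced a result
            shardTopDocs[i] = new TopDocs(0, new ScoreDoc[0], Float.NaN);
        }
    }
    return TopDocs.merge(null, size, shardTopDocs); // null sort = sort by score
}
```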
Closes #6022
- Fix a bug where repeatedly calling computeSummaryStatistics() could accumulate some values incorrectly.
- Fix the check that the number of responsive nodes is <= the number of candidate benchmark nodes.
- Add public getters for summary statistics
- Add javadoc for new getters
- Add javadoc comments about API use
- Randomized integration tests for the benchmark API.
- Negative tests for cases where the cluster cannot run benchmarks.
- Return 404 on missing benchmark name.
- Allow 'types' to be specified as an array in the JSON syntax when describing a benchmark competition (see the sketch after this list).
- Don't record slowest for single-request competitions.
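A hedged sketch of such a competition body built with XContentBuilder (field layout illustrative, not the full competition description):

```java
import java.io.IOException;

import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;

// Hedged sketch: pass 'types' as a JSON array rather than a single string.
static XContentBuilder competitionSource() throws IOException {
    return XContentFactory.jsonBuilder()
            .startObject()
                .field("name", "my_competition")   // illustrative fields
                .field("indices", "test_index")
                .array("types", "type_one", "type_two")
            .endObject();
}
```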
Closes #6003, #5906, #5903, #5904
This fixes a stack overflow in the test for the _cat/recovery API.
The regular expression that tests the response body was modified to
handle large responses properly.
The REST test for _cat/allocation was failing due to a regular expression not accounting for space-padded, right-justified text.
Also improved the regular expressions to be smarter about optional values and to use '+' instead of '*' where applicable (see the sketch below).
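For illustration, a hedged sketch of the kind of change involved; the patterns are illustrative, not the actual test regexes:

```java
import java.util.regex.Pattern;

// Hedged sketch: the leading \s* tolerates the padding of a right-justified
// column, and \d+ (unlike \d*) refuses to match an empty cell.
static final Pattern NAIVE_CELL  = Pattern.compile("\\d* +\\S+");
static final Pattern PADDED_CELL = Pattern.compile("\\s*\\d+\\s+\\S+");
```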
Adds a table with an exhaustive list of all available headers and a brief description of each (mostly from `org.elasticsearch.rest.action.cat.RestNodesAction`), so that people do not need to go searching for them in the code like I did, or dig through `nodes?help`.
Significant terms internally maintain a priority queue per shard with a size potentially lower than the number of terms. This queue uses the score as the criterion to determine whether a bucket is kept. If many terms with low subsetDF score very high but `min_doc_count` is set high, this might result in no terms being returned, because the pq is filled with low-frequency terms which are all sorted out in the end.
This can be avoided by increasing the `shard_size` parameter to a higher value. However, it is not immediately clear to which value this parameter must be set, because we cannot know how many terms with low frequency are scored higher than the high-frequency terms that we are actually interested in.
On the other hand, if there is no routing of docs to shards involved, we can maybe assume that the documents of each class, and also the terms therein, are distributed evenly across shards. In that case it might be easier to not add terms to the pq that have subsetDF <= `shard_min_doc_count`, which can be set to something like `min_doc_count`/number of shards, because we would assume that even when summing up the subsetDF across shards, `min_doc_count` will not be reached.
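A hedged sketch of that rule of thumb (numbers illustrative; assumption: the new parameter is exposed on the builder as shardMinDocCount):

```java
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.bucket.significant.SignificantTermsBuilder;

// Hedged sketch: with evenly distributed documents, a term that cannot reach
// min_doc_count globally is unlikely to exceed min_doc_count / numberOfShards
// on any single shard, so such terms can be kept out of the per-shard pq.
static SignificantTermsBuilder significantTagsAgg() {
    int minDocCount = 10;     // illustrative
    int numberOfShards = 5;   // illustrative
    int shardMinDocCount = minDocCount / numberOfShards; // here: 2
    return AggregationBuilders.significantTerms("significant_tags")
            .field("tag")
            .minDocCount(minDocCount)
            .shardMinDocCount(shardMinDocCount); // assumption: new setter
}
```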
Closes #5998, closes #6041
Relates to #6059, where two new constants were introduced in IndicesOptions. There were already two constants there, though, one of which we could have reused. This commit tries to unify them.
We currently compute initial sizings based on the cardinality of our fields. This can be highly exaggerated for sub-aggregations: for example, if there is a parent terms aggregation that is executed over a field that has a very long tail, most buckets will only collect a couple of documents.
Close #5994
We switched to Lucene's SloppyMath way of computing an approximate value of the earth's diameter given a latitude in order to compute distances, yet the bounding box optimization of the geo distance filter still assumed a constant earth diameter, equal to the average.
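For illustration, a hedged sketch of the latitude dependence (Lucene 4.x exposed this as `SloppyMath.earthDiameter`; values approximate):

```java
import org.apache.lucene.util.SloppyMath;

// Hedged sketch: the approximate diameter shrinks from equator to pole, so
// a bounding box derived from the average diameter can be off at extreme
// latitudes.
double atEquator = SloppyMath.earthDiameter(0);  // roughly 12,756 km
double atPole    = SloppyMath.earthDiameter(90); // roughly 12,714 km
```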
Close #6008
This relates to #6040. The fix is twofold: first, not handling a missing context specially in the search code, but behaving the same as we do in non-scroll search, where if all the shards failed, an exception is raised; second, applying this logic in both scroll cases.
Update `geo-shape-type.asciidoc` to include all `GeoShapeType`s supported by `org.elasticsearch.common.geo.builders.ShapeBuilder`.
Changes include:
1. A tabular mapping of GeoJSON types to Elasticsearch types
2. Listing all types, with brief examples, for all supported Elasticsearch types
3. Moving non-standard types to the bottom (really just moving Envelope to the bottom)
4. Linking to all GeoJSON types
5. Adding whitespace around tightly nested arrays (particularly `multipolygon`) for readability