If the filter producing the FixedBitSet (FBS) were cached, then it would flip bits each time the NonNestedDocsFilter was executed.
In our case we cache the NonNestedDocsFilter itself, so that didn't happen each time this filter was used.
However, the hash code of NonNestedDocsFilter and NestedDocsFilter was the same, since NonNestedDocsFilter directly used NestedDocsFilter#hashCode().
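A minimal sketch of the problem and the kind of fix, using hypothetical stand-in classes rather than the actual Lucene/Elasticsearch filters: the wrapping filter must mix something of its own into the hash code instead of reusing the wrapped filter's value verbatim.
```java
// Illustrative only (not the actual Elasticsearch classes): a wrapping filter must
// not reuse the wrapped filter's hash code verbatim, otherwise both filters map to
// the same cache key and one can be served the other's cached bit set.
final class NestedDocsFilterSketch {
    final String nestedTypeFilter; // stands in for the wrapped filter state

    NestedDocsFilterSketch(String nestedTypeFilter) {
        this.nestedTypeFilter = nestedTypeFilter;
    }

    @Override
    public int hashCode() {
        return nestedTypeFilter.hashCode();
    }
}

final class NonNestedDocsFilterSketch {
    final NestedDocsFilterSketch nestedFilter;

    NonNestedDocsFilterSketch(NestedDocsFilterSketch nestedFilter) {
        this.nestedFilter = nestedFilter;
    }

    @Override
    public int hashCode() {
        // The bug: returning nestedFilter.hashCode() directly collided with the nested
        // filter's cache key. Mixing in something specific to this class avoids that.
        return 31 * NonNestedDocsFilterSketch.class.getName().hashCode() + nestedFilter.hashCode();
    }
}
```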
Closes#8227 Closes#8232
The test validates that minimum master nodes is honored after shutting down nodes. When nodes are restarted we may go through a couple of master elections: a master is elected and discovers that some of the old nodes left before all new nodes have joined. While processing that node failure the master de-elects itself, which is fine because we will start a new master election. The test, however, runs a clusterHealth call that waits for events. This is implemented using a cluster state update task, which fails when the master steps down. A longer-term fix requires a rewrite of the cluster health API code, but a quick fix is to not check for events (not needed for this test).
This commit adds a new field to the response of the terms aggregation called
`sum_other_doc_count` which is equal to the sum of the doc counts of the buckets
that did not make it into the list of top buckets. It is typically useful to have
a sector called e.g. `other` when using terms aggregations to build pie charts.
Example query and response:
```json
GET test/_search?search_type=count
{
  "aggs": {
    "colors": {
      "terms": {
        "field": "color",
        "size": 3
      }
    }
  }
}
```
```json
{
  [...],
  "aggregations": {
    "colors": {
      "doc_count_error_upper_bound": 0,
      "sum_other_doc_count": 4,
      "buckets": [
        {
          "key": "blue",
          "doc_count": 65
        },
        {
          "key": "red",
          "doc_count": 14
        },
        {
          "key": "brown",
          "doc_count": 3
        }
      ]
    }
  }
}
```
Close#8213
Add source_node and target_node fields to the recovery cat API. Also fixed and updated the documentation, which was incomplete with regard to field names.
Closes#8041
- Added parseBooleanExact to Booleans, which throws an exception in case of
  parse failure (see the sketch below)
- Used parseExact in statics to make it consistent
- Code review fixes
- Added a unit test
- Changed the exception type from a parse exception to ElasticsearchIllegalArgumentException
- Used isExplicit*
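A minimal sketch of what such a strict parser can look like; the class and method names follow the commit's wording, but the accepted values and the exact exception type shown here are assumptions:
```java
public final class Booleans {
    private Booleans() {}

    /**
     * Parses a boolean strictly: only "true" and "false" are accepted, anything
     * else (including null) throws instead of silently falling back to a default.
     */
    public static boolean parseBooleanExact(String value) {
        if ("true".equals(value)) {
            return true;
        }
        if ("false".equals(value)) {
            return false;
        }
        // In Elasticsearch this would be an ElasticsearchIllegalArgumentException.
        throw new IllegalArgumentException(
                "Failed to parse value [" + value + "]: only [true] or [false] are allowed.");
    }
}
```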
Closes#8097
Initialize ports.
JUnit uses an instance per test, which caused the port range to be
initialized twice since suite-level tests are not configured in a
different context.
The cluster for `Scope.SUITE` tests must be initialized in a static manner
before the first test runs, otherwise the random context used to initialize
the cluster is taken from the test's randomness rather than the suite's randomness.
This means test clusters will have different setups if only a single test is
executed, or the test might even run with an entirely different random sequence.
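For illustration only (a hypothetical test class, not the actual test infrastructure): JUnit creates a fresh instance of the test class for every test method, so per-instance state is re-initialized each time, while static state is initialized exactly once before the first test runs.
```java
import org.junit.Test;

public class InitializationScopeTest {
    // Initialized once, before the first test method runs: the right place for
    // suite-level resources such as a shared port range or a SUITE-scoped cluster.
    private static final long SUITE_SEED = System.nanoTime();

    // Re-initialized for every test method, because JUnit creates a fresh instance
    // of the test class per test: per-test randomness belongs here.
    private final long testSeed = System.nanoTime();

    @Test
    public void first() {
        System.out.println("suite=" + SUITE_SEED + " test=" + testSeed);
    }

    @Test
    public void second() {
        // SUITE_SEED is unchanged across tests, testSeed differs per test.
        System.out.println("suite=" + SUITE_SEED + " test=" + testSeed);
    }
}
```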
This is due to a bug in older versions causing refreshes to potentially be missed due to relocations (#6545).
Also:
- Changed test to keep track of ids and report missing ones.
- Removed total count check from assertSearchHits in order to enable per-id checks in case of a mismatch
- Added a printable unique id part to the ids of dummy documents added by indexRandom. The current random unicode id
  sometimes prints as ???? in the logs, making them hard to trace
The MLT query has a lot of parameters. For example, a set of documents is
specified with either `like_text`, `ids` or `docs`, with at least one
parameter required. This commit groups all the document specification
parameters under one called `like`. The syntax is described below and could
easily be extended to allow for new means of specifying document input. The
`like_text`, `ids` and `docs` parameters are deprecated.
As a single piece of text:
```json
{
  "query": {
    "more_like_this": {
      "like": "some text here"
    }
  }
}
```
As a single item:
```json
{
  "query": {
    "more_like_this": {
      "like": {
        "_index": "imdb",
        "_type": "movies",
        "_id": "88247"
      }
    }
  }
}
```
Or as a mixture of all:
```json
{
  "query": {
    "more_like_this": {
      "like": [
        "Some random text ...",
        {
          "_index": "imdb",
          "_type": "movies",
          "_id": "88247"
        },
        {
          "_index": "imdb",
          "_type": "movies",
          "doc": {
            "title": "Document with an artificial title!"
          }
        }
      ]
    }
  }
}
```
Closes#8039
When reading the [rolling upgrade process](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-upgrade.html#rolling-upgrades), you can see that we wrote:
* disable allocation
* upgrade node1
* upgrade node2
* upgrade node3
* ...
* enable allocation
That won't work: once a node has been removed and restarted with allocation still disabled, no shards will be allocated to it anymore.
So shutting down node2 and the remaining nodes won't leave any allocated shards to serve index and search requests.
We should write:
* disable allocation
* upgrade node1
* enable allocation
* wait for shards being recovered on node1
* disable allocation
* upgrade node2
* enable allocation
* wait for shards being recovered on node2
* disable allocation
* upgrade node3
* enable allocation
* wait for shards being recovered on node3
* disable allocation
* ...
* enable allocation
I think this documentation update should go in 1.3, 1.4, 1.x and master branches.
Closes#8218 Closes#7973.
If a bulk request contains a mix of indexing requests for an existing index and for one that needs to be auto-created, but the cluster configuration prevents auto-creation of the new index, the ingest process hangs. The exception for the failure to create the index was not caught or reported back properly. Added a JUnit test to recreate the issue; the associated fix is in TransportBulkAction.
Closes#8125
When reading metadata we do catch FileNotFoundException and NoSuchFileException
today, log the event and return an empty metadata object. Yet, in some cases
this might be the wrong thing to do, i.e. if a commit point is provided these
situations are actually an error and should be rethrown. This commit
pushes the responsibility to the caller to handle this exception.
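A rough sketch, with made-up method names, of what pushing the decision to the caller can look like: one caller tolerates a missing file and falls back to empty metadata, another treats it as a hard error and lets the exception propagate.
```java
import java.io.IOException;
import java.nio.file.NoSuchFileException;
import java.util.Collections;
import java.util.Map;

class StoreMetadataSketch {
    // Stand-in for the metadata reader: it no longer swallows missing-file
    // exceptions itself; they bubble up to the caller (here it always "fails").
    static Map<String, Long> readMetadata(String commitPoint) throws IOException {
        throw new NoSuchFileException("segments file for commit [" + commitPoint + "] not found");
    }

    // Caller that asked for "whatever is there": a missing file is acceptable,
    // so it falls back to an empty metadata snapshot.
    static Map<String, Long> readLatestOrEmpty() throws IOException {
        try {
            return readMetadata("latest");
        } catch (NoSuchFileException e) {
            return Collections.emptyMap();
        }
    }

    // Caller that asked for a specific commit point: a missing file is a real
    // error, so the exception is simply allowed to propagate.
    static Map<String, Long> readForCommit(String commitPoint) throws IOException {
        return readMetadata(commitPoint);
    }
}
```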
Closes#8207
The concurrency level allows configuring the internal segments the cache
uses to store data. This can have a direct impact on eviction rates, since
memory-bound caches are divided equally into segments, which can cause
early evictions if cache entries are not well balanced across them.
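For illustration, Guava's CacheBuilder is an example of a segmented, memory-bound cache with a configurable concurrency level; the sizes below are made up, and this is not a claim about the exact Elasticsearch setting name that exposes the knob.
```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.Weigher;

public class CacheConcurrencyLevelExample {
    public static void main(String[] args) {
        // A memory-bound Guava cache: the maximum weight is divided equally across the
        // internal segments, so a higher concurrency level means a smaller budget per
        // segment and potentially earlier evictions when entries are unevenly spread.
        Cache<String, byte[]> cache = CacheBuilder.newBuilder()
                .concurrencyLevel(16)                 // number of internal segments
                .maximumWeight(100 * 1024 * 1024)     // ~100 MB overall budget
                .weigher(new Weigher<String, byte[]>() {
                    @Override
                    public int weigh(String key, byte[] value) {
                        return value.length;
                    }
                })
                .build();

        cache.put("some-key", new byte[1024]);
        System.out.println("entries: " + cache.size());
    }
}
```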
Relates to #7836
We already have two places duplicating this rather hairy logic. This
commit introduces a new RefCounted interface and an abstract implementation
that can be used for delegation. It factors out all the reference counting
and adds single- and multi-threaded tests for it.
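A minimal sketch of the pattern, simplified relative to whatever the actual interface looks like: incRef/decRef around an AtomicInteger, with the final decRef triggering cleanup exactly once.
```java
import java.util.concurrent.atomic.AtomicInteger;

interface RefCounted {
    void incRef();        // take a reference; fails if the resource is already closed
    boolean tryIncRef();  // take a reference only if the resource is still alive
    void decRef();        // release a reference; the last release closes the resource
}

abstract class AbstractRefCounted implements RefCounted {
    private final AtomicInteger refCount = new AtomicInteger(1); // starts owned by the creator

    @Override
    public void incRef() {
        if (!tryIncRef()) {
            throw new IllegalStateException("resource is already closed, can't increment refCount");
        }
    }

    @Override
    public boolean tryIncRef() {
        while (true) {
            int count = refCount.get();
            if (count <= 0) {
                return false; // already closed
            }
            if (refCount.compareAndSet(count, count + 1)) {
                return true;
            }
        }
    }

    @Override
    public void decRef() {
        int count = refCount.decrementAndGet();
        assert count >= 0 : "refCount went negative";
        if (count == 0) {
            closeInternal(); // invoked exactly once, when the last reference is released
        }
    }

    protected abstract void closeInternal();
}
```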
Closes#8210
Windows can throw NoSuchFileException when using Files.walkFileTree and deleting files concurrently. This commit changes IO exceptions into assertion errors so that assertBusy will wait for them as well.
This commit rewrites the state controls in the RecoveryTarget family classes to make it easier to guarantee that:
- recovery resources are only cleared once there are no ongoing requests
- recovery is automatically canceled when the target shard is closed/removed
- canceled recoveries do not leave temp files behind.
Highlights of the change:
1) All temporary files are cleared upon failure/cancel (see #7315)
2) All newly created files are always temporary
3) Doesn't list local files on the cluster state update thread (which threw unwanted exceptions)
4) Recoveries are canceled by a listener to IndicesLifecycle.beforeIndexShardClosed, so we don't need to cancel them explicitly (a simplified sketch of this wiring follows the list)
5) Simplifies RecoveryListener to only notify when a recovery is done or failed. Removed subtleties like ignore and retry (they are dealt with internally)
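A simplified sketch of the cancel-on-close wiring mentioned in point 4; the types and signatures below are stand-ins, not the real RecoveryTarget classes.
```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Stand-ins for the real classes; the point is the shape of the wiring, not the API.
interface OngoingRecovery {
    void cancel(String reason); // also cleans up any temporary files
}

class RecoveriesCollection {
    private final ConcurrentMap<Integer, OngoingRecovery> ongoing = new ConcurrentHashMap<>();

    void startRecovery(int shardId, OngoingRecovery recovery) {
        ongoing.put(shardId, recovery);
    }

    // Called from a lifecycle listener when the target shard is closed/removed,
    // so callers never need to cancel recoveries explicitly.
    void cancelForShard(int shardId, String reason) {
        OngoingRecovery recovery = ongoing.remove(shardId);
        if (recovery != null) {
            recovery.cancel(reason);
        }
    }
}
```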
Closes#8092, Closes#7315
This commit adds the ability to enable / disable relocations
on an entire cluster or on individual indices for either:
* `primaries` - only primaries can rebalance
* `replica` - only replicas can rebalance
* `all` - everything can rebalance (default)
* `none` - all rebalances are disabled
similar to the allocation enable / disable functionality.
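A hedged example of toggling this from the Java client; the setting key is an assumption here, presumed to mirror the existing allocation enable setting as `cluster.routing.rebalance.enable`.
```java
import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.ImmutableSettings;

public class RebalanceEnableExample {
    // Restrict rebalancing to primaries only; the setting name below is assumed,
    // not taken from this commit's diff.
    public static void restrictRebalancingToPrimaries(Client client) {
        client.admin().cluster().prepareUpdateSettings()
                .setTransientSettings(ImmutableSettings.settingsBuilder()
                        .put("cluster.routing.rebalance.enable", "primaries")
                        .build())
                .execute().actionGet();
    }
}
```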
Relates to #7288
If `fielddata_fields` is passed as a simple value instead of an array
we end up in an infinite loop creating parsed elements with null
values.
This commit validates the incoming token.
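A rough sketch of the kind of token validation, not the actual source change: only loop when the current token really opens an array, accept a lone value, and reject anything else instead of spinning forever.
```java
import org.elasticsearch.common.xcontent.XContentParser;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class FielddataFieldsTokenCheck {
    public static List<String> parseFields(XContentParser parser) throws IOException {
        List<String> fields = new ArrayList<>();
        XContentParser.Token token = parser.currentToken();
        if (token == XContentParser.Token.START_ARRAY) {
            // Proper array: consume values until the array closes.
            while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {
                fields.add(parser.text());
            }
        } else if (token.isValue()) {
            // Lenient single value: treat it as a one-element list.
            fields.add(parser.text());
        } else {
            // Anything else is an error rather than an endless loop.
            throw new IllegalArgumentException(
                    "expected a value or an array of values for [fielddata_fields]");
        }
        return fields;
    }
}
```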
Closes#8203
Since we enabled the disk threshold decider by default, we need to
enable the cluster info service so that disk usages and shard sizes can
also be gathered.
Adds a test that checks that we are gathering information by default.
With this change, the elasticsearch script can be linked to another path
without having to set ES_INCLUDE to match the installation path.
Previously, the elasticsearch script would find ES_HOME correctly even if linked,
but could not find the include script, even though finding it would, to me, be
the expected behavior based on its current search path.
Closes#4958
This commit adds throttle stats to the indexing stats and uses a callback from InternalEngine to manage the stats.
Also updates IndexStatsTests to test for these new stats.
Stats added:
```
throttle_time_in_millis
is_throttled
```
Closes#7861
We used to handle FNF exceptions in the store when reading a snapshot.
For instance, if we can't open a segments file for a given commit point
we just return an empty metadata object and trace-log the event. This can
cause shards to be falsely marked as corrupted if a shard is forcefully
removed while a recovery starts at the same time. We should in general
bubble up these exceptions and let the caller decide how to handle the
IOExceptions.
The issue with making it dynamic is that in the event a cluster is
switched from a noop to a concrete implementation, there may be
in-flight requests; once these requests complete we adjust the breaker
with a negative number and trip an assertion.
This change also uses noop breakers in InternalTestCluster only rarely.
Previously, the leniency was on a per-query basis, with each query being
parsed into multiple queries, one for each field. If any one of these
queries failed, the entire query was discarded in the name of being
lenient.
Now query parts will only be discarded if they fail for a particular
field; the entire query is not discarded. This helps when performing a
query over a numeric and a string field, as only the sub-queries that are
invalid due to format exceptions will be discarded.
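A simplified sketch of per-field leniency, not the actual SimpleQueryParser code: each field's sub-query is built independently and only the ones whose construction blows up (e.g. on a number format problem) are dropped.
```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

import java.util.Map;

public class PerFieldLeniencySketch {
    /**
     * Builds one sub-query per field and drops only the fields whose sub-query
     * construction fails, instead of discarding the whole query.
     */
    public static Query buildLenient(String text, Map<String, Float> fieldsAndBoosts) {
        BooleanQuery combined = new BooleanQuery(); // Lucene 4.x style, as used by ES 1.x
        for (Map.Entry<String, Float> entry : fieldsAndBoosts.entrySet()) {
            try {
                Query perField = newFieldQuery(entry.getKey(), text); // may throw for incompatible fields
                perField.setBoost(entry.getValue());
                combined.add(perField, BooleanClause.Occur.SHOULD);
            } catch (RuntimeException e) {
                // lenient: skip only this field's sub-query, keep the others
            }
        }
        return combined;
    }

    // Stand-in for the real per-field query construction, which can throw when the
    // text cannot be interpreted for the field (e.g. non-numeric text on a numeric field).
    private static Query newFieldQuery(String field, String text) {
        return new TermQuery(new Term(field, text));
    }
}
```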
Also moves the `simple_query_string` queries out of SimpleQueryTests and
into a dedicated SimpleQueryStringTests class.
Fixes#7967
The live docs that are passed down were ignored by the filter impl. Now the children filter gets wrapped with ApplyAcceptedDocsFilter, so live docs are actually applied.
Closes#8180
The IndicesWarmer gets set before the InternalIndexService gets set, which can lead to a small time window where InternalIndexService isn't set.
Closes#8140 Closes#8168