The plain highlighter fails when it tries to select the fragments based on a query containing either a `has_child` or `has_parent` query.
The plain highlighter should just ignore parent/child queries, as it makes no sense to highlight a parent match with a `has_child` query: the child documents are not available at highlight time. If child documents should be highlighted, inner hits should be used instead.
Parent/child queries already have no effect when the `fvh` or `postings` highlighter is used. The test added in this commit verifies that.
Closes #14999
This change adds a second ParseField for the `aggs` field in the search
request so that both `aggregations` and `aggs` are accepted, non-deprecated
fields in the search request.
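For illustration, a minimal sketch of the idea (the class and constant names are made up, not the actual parser code): instead of declaring `aggs` as a deprecated alias of `aggregations`, each spelling gets its own ParseField, so matching either name works without emitting a deprecation warning.
```java
import org.elasticsearch.common.ParseField;

public final class SearchSourceFields {
    // Two separate, non-deprecated ParseFields: a field matching either name is
    // routed to the same aggregation-parsing code path, and neither spelling
    // triggers a deprecation warning.
    public static final ParseField AGGREGATIONS_FIELD = new ParseField("aggregations");
    public static final ParseField AGGS_FIELD = new ParseField("aggs");
}
```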
Closes #19504
The current heuristic to compute a default shard size is pretty aggressive:
it returns `max(10, number_of_shards * size)` as a value for the shard size.
I think making it less aggressive has the benefit that it would reduce the
likelihood of running into an OOME when there are many shards (yearly
aggregations with time-based indices can push the number of shards into the
thousands) and make the use of breadth-first collection more likely/efficient.
This commit replaces the heuristic with `size * 1.5 + 10`, which is enough
to have good accuracy on Zipfian distributions.
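To make the difference concrete, here is a small sketch of the two heuristics (method names are illustrative): with `size = 10` and 1,000 shards, the old formula asks each shard for 10,000 terms, while the new one asks for 25, independent of the shard count.
```java
final class ShardSizeHeuristics {
    // Old heuristic: grows linearly with the number of shards.
    static int oldDefaultShardSize(int size, int numberOfShards) {
        return Math.max(10, numberOfShards * size);
    }

    // New heuristic: depends only on the requested size.
    static int newDefaultShardSize(int size) {
        return (int) (size * 1.5 + 10);
    }
}
```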
Add an assertion to the most popular way of turning the response object
into the actual HTTP response. As it stands, everywhere we return
`201 CREATED` we also return the `Location` header. This will help to keep it
that way, though it won't catch all uses.
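A hedged sketch of the invariant the assertion enforces (the helper and its types are hypothetical, not the actual response-conversion code):
```java
import java.util.List;
import java.util.Map;

final class ResponseInvariants {
    // Hypothetical helper: any 201 CREATED response must also carry a Location header.
    static void assertLocationHeader(int status, Map<String, List<String>> headers) {
        assert status != 201 || headers.containsKey("Location")
            : "201 CREATED responses must carry a Location header";
    }
}
```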
Follow-up to #19509
When you simulate a pipeline without specifying an id against a node where the request is redirected to the master node,
the request and the response throw an NPE:
```
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([3B9536AC6AA23C06:DD62280CF765DA1F]:0)
at org.elasticsearch.common.io.stream.StreamOutput.writeString(StreamOutput.java:300)
at org.elasticsearch.action.ingest.SimulatePipelineRequest.writeTo(SimulatePipelineRequest.java:92)
at org.elasticsearch.transport.local.LocalTransport.sendRequest(LocalTransport.java:222)
at org.elasticsearch.test.transport.AssertingLocalTransport.sendRequest(AssertingLocalTransport.java:95)
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:470)
at org.elasticsearch.action.TransportActionNodeProxy.execute(TransportActionNodeProxy.java:51)
at org.elasticsearch.client.transport.support.TransportProxyClient.lambda$execute$441(TransportProxyClient.java:63)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:233)
at org.elasticsearch.client.transport.support.TransportProxyClient.execute(TransportProxyClient.java:63)
at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:309)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:403)
at org.elasticsearch.client.FilterClient.doExecute(FilterClient.java:67)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:403)
at org.elasticsearch.client.support.AbstractClient$ClusterAdmin.execute(AbstractClient.java:710)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:80)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:54)
at org.elasticsearch.action.ActionRequestBuilder.get(ActionRequestBuilder.java:62)
at org.elasticsearch.ingest.bano.BanoProcessorIntegrationTest.testSimulateProcessorConfigTarget(BanoProcessorIntegrationTest.java:139)
```
This patch fixes this and adds some random tests.
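A sketch of the usual fix for this class of NPE, assuming the failing field is the optional pipeline id: serialize it with the optional-string variants so a null value survives the wire instead of blowing up in `writeString`.
```java
import java.io.IOException;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;

class OptionalIdSerialization {
    private String id; // may legitimately be null when no id was specified

    void writeTo(StreamOutput out) throws IOException {
        out.writeOptionalString(id); // writeString(id) throws an NPE when id == null
    }

    void readFrom(StreamInput in) throws IOException {
        id = in.readOptionalString();
    }
}
```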
* fix explain in function_score if no function filter matches
When each function in function_score has a filter but none of them matches,
we always assume 1 for the combined functions and then combine that with the
sub query score.
But the explanation did not reflect that: in case no function matched,
we did not even use the actual score that was computed in the explanation.
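For illustration, a hedged sketch of a query that hits this code path (field names and values are made up): every function has a filter, and when none of them matches any document, the combined function score falls back to 1 and is combined with the sub query score.
```java
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.index.query.functionscore.FunctionScoreQueryBuilder;
import org.elasticsearch.index.query.functionscore.ScoreFunctionBuilders;

final class NoMatchingFilterExample {
    static FunctionScoreQueryBuilder build() {
        return QueryBuilders.functionScoreQuery(
            QueryBuilders.matchQuery("title", "elasticsearch"),
            new FunctionScoreQueryBuilder.FilterFunctionBuilder[] {
                // If no document matches this filter, the function never
                // contributes and the combined function score defaults to 1.
                new FunctionScoreQueryBuilder.FilterFunctionBuilder(
                    QueryBuilders.termQuery("tag", "matches-nothing"),
                    ScoreFunctionBuilders.weightFactorFunction(42))
            });
    }
}
```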
The reason for this change is that currently, if a user specifies e.g. `2M`,
meaning 2 months, as a time value, we parse this to 2 minutes instead of
throwing an exception explaining that time units in months are not supported
(due to months having variable time spans).
This could be surprising to a user and could put a lot of load on
the cluster performing a task that was never intended and whose results
will be useless anyway.
It is generally accepted that `m` indicates minutes and `M` indicates
months with time values, so this is consistent with the expectations a
user might have around specifying time units.
A concrete example of where this causes issues is in the decay score
function which uses TimeValue to parse the scale and offset parameters
of the decay into millisecond values to use in the calculation.
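A hedged illustration of the difference (the setting name is made up, and the exact exception type may differ):
```java
import org.elasticsearch.common.unit.TimeValue;

public class TimeUnitExample {
    public static void main(String[] args) {
        // Lower-case `m` means minutes.
        TimeValue twoMinutes = TimeValue.parseTimeValue("2m", null, "my_setting");
        System.out.println(twoMinutes.minutes()); // 2

        // Upper-case `M` (months) is now rejected instead of silently
        // being parsed as 2 minutes.
        try {
            TimeValue.parseTimeValue("2M", null, "my_setting");
        } catch (Exception e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```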
Relates to #19619
Currently, to use the ResourceWatcherService to watch files, you
implement a FileChangesListener. However, this is a class, not an
interface, even though it carries no state and just defines a few
callback methods. This change converts FileChangesListener to an
interface.
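A minimal sketch of what the conversion enables, showing only a subset of the callbacks with assumed names: as an interface with default no-op methods, implementers override only the callbacks they care about and remain free to extend another class.
```java
import java.nio.file.Path;

public interface FileChangesListener {
    default void onFileCreated(Path file) {}
    default void onFileChanged(Path file) {}
    default void onFileDeleted(Path file) {}
}
```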
The tests for authentication extend ESIntegTestCase and use a mock
authentication plugin. This way the clients don't have to worry about
running it. Sadly, that means we don't really have good coverage on the
REST portion of the authentication.
This also adds ElasticsearchStatusException, an exception on which you
can set an explicit status. The nice thing about it is that you can
set the RestStatus that it returns to whatever arbitrary status you like,
based on the status that comes back from the remote system.
reindex-from-remote then uses it to wrap all remote failures, preserving
the status from the remote Elasticsearch or whatever proxy sits between us
and the remote Elasticsearch.
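A hedged sketch of the wrapping idea (the helper and message are illustrative):
```java
import org.elasticsearch.ElasticsearchStatusException;
import org.elasticsearch.rest.RestStatus;

final class RemoteFailures {
    // Preserve whatever status the remote Elasticsearch (or an intermediate
    // proxy) returned, rather than mapping every failure to 500.
    static ElasticsearchStatusException wrap(RestStatus remoteStatus, Exception cause) {
        return new ElasticsearchStatusException("remote request failed", remoteStatus, cause);
    }
}
```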
It's possible that the shard has been closed but the resources
associated with it have not yet been released. This waits until the
index lock can be obtained before running the tool.
This test fails because of an unknown exception in the FsService.stats() method, which causes no stats to be returned. With this change, the exception that is causing this issue is logged.
Related to #19591 and #17964
This makes it obvious that these tests are for running the client yaml
suites. Now that there are other ways of running tests using the REST
client against a running cluster, we can't go on calling the shared
client yaml tests "REST tests". They are REST tests, but they aren't
**the** REST tests.
This adds an extra REST handler for `_ingest/pipeline` so that users do not need to supply `_ingest/pipeline/*` to get all of them.
- Also adds a teardown section to the related ingest REST tests. A sketch of the handler registration follows.
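A hedged sketch of how the extra path is typically registered alongside the existing id-based path (wiring simplified, helper class is hypothetical):
```java
import static org.elasticsearch.rest.RestRequest.Method.GET;

import org.elasticsearch.rest.RestController;
import org.elasticsearch.rest.RestHandler;

final class PipelineRoutes {
    // Register the bare path alongside the id-based path so that
    // GET /_ingest/pipeline returns all pipelines.
    static void register(RestController controller, RestHandler handler) {
        controller.registerHandler(GET, "/_ingest/pipeline", handler);
        controller.registerHandler(GET, "/_ingest/pipeline/{id}", handler);
    }
}
```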
Performing the bulk request shown in #19267 now results in the following:
```
{"_index":"test","_type":"test","_id":"1","_version":1,"_operation":"create","forced_refresh":false,"_shards":{"total":2,"successful":1,"failed":0},"status":201}
{"_index":"test","_type":"test","_id":"1","_version":1,"_operation":"noop","forced_refresh":false,"_shards":{"total":2,"successful":1,"failed":0},"status":200}
```
This adds the `bin/elasticsearch-translog` bin file that will be used
for CLI tasks pertaining to Elasticsearch. Currently it implements only
a single sub-command, `truncate`, that creates a truncated
translog for a given folder.
Here's what running the tool looks like:
```
λ bin/elasticsearch-translog truncate -d data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/
Checking existing translog files
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! WARNING: Elasticsearch MUST be stopped before running this tool !
! !
! WARNING: Documents inside of translog files will be lost !
! !
! WARNING: The following files will be DELETED! !
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-10.tlog
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-18.tlog
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-21.tlog
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-12.ckp
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-25.ckp
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-29.tlog
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-2.tlog
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-5.tlog
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-41.ckp
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-6.ckp
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-37.ckp
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-24.ckp
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-11.ckp
Continue and DELETE files? [y/N] y
Reading translog UUID information from Lucene commit from shard at [data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/index]
Translog Generation: 3
Translog UUID : AxqC4rocTC6e0fwsljAh-Q
Removing existing translog files
Creating new empty checkpoint at [data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog.ckp]
Creating new empty translog at [data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-3.tlog]
Done.
```
It also includes a `-b` batch option that can be used to skip the
confirmation dialog.
Resolves #19123
This change adds a new special path to the buckets_path syntax
`_bucket_count`. This new option will return the number of buckets for a
multi-bucket aggregation, which can then be used in pipeline
aggregations.
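For illustration, a hedged sketch (field names and the script are made up, and the script syntax may vary by script language) that keeps only date-histogram buckets whose `categories` terms sub-aggregation produced at least two buckets:
```java
import java.util.HashMap;
import java.util.Map;

import org.elasticsearch.script.Script;
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramInterval;
import org.elasticsearch.search.aggregations.pipeline.PipelineAggregatorBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

final class BucketCountExample {
    static void addAggregations(SearchSourceBuilder source) {
        Map<String, String> bucketsPaths = new HashMap<>();
        // `_bucket_count` resolves to the number of buckets produced by the
        // multi-bucket `categories` aggregation inside each parent bucket.
        bucketsPaths.put("categoryCount", "categories._bucket_count");

        source.aggregation(AggregationBuilders.dateHistogram("per_day")
            .field("timestamp")
            .dateHistogramInterval(DateHistogramInterval.DAY)
            .subAggregation(AggregationBuilders.terms("categories").field("category"))
            .subAggregation(PipelineAggregatorBuilders.bucketSelector(
                "at_least_two_categories", bucketsPaths,
                new Script("params.categoryCount >= 2"))));
    }
}
```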
Closes #19553
When the request body is missing, all documents in the target index are counted.
As mentioned in #19422, the same should happen when the request body is an empty
JSON object. This is also the behaviour for the `_search` endpoint, and the two
APIs should behave in the same way.
Before, the aggregation tree was traversed to figure out what the parent level is; this commit
changes that by using `NestedScope` to figure out the nested depth level. The big upsides
are that this cleans up `NestedAggregator` (it used a hack to lazily figure out the nested parent filter),
and that this is also what the `nested` query uses, therefore a `nested` query can be included inside a `nested`
aggregation and work correctly.
Closes #11749
Closes #12410
This adds a header that looks like `Location: /test/test/1` to the
response for the index/create/update API. The requirement for the header
comes from https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html, and
https://tools.ietf.org/html/rfc7231#section-7.1.2 claims that relative
URIs are OK. So we use an absolute path, which should resolve to the
appropriate location.
Closes #19079
This makes large changes to our rest test infrastructure, allowing us
to write junit tests that test a running cluster via the rest client.
It does this by splitting ESRestTestCase into two classes:
* ESRestTestCase is the superclass of all tests that use the rest client
to interact with a running cluster.
* ESClientYamlSuiteTestCase is the superclass of all tests that use the
rest client to run the yaml tests. These tests are shared across all
official clients, thus the `ClientYamlSuite` part of the name.
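A hedged sketch of what such a junit test can now look like (the test name, endpoint, and assertions are illustrative):
```java
import java.io.IOException;

import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Response;
import org.elasticsearch.test.rest.ESRestTestCase;

public class ClusterUpIT extends ESRestTestCase {
    public void testClusterIsUp() throws IOException {
        // client() talks to the running cluster through the low-level REST client.
        Response response = client().performRequest("GET", "/");
        assertEquals(200, response.getStatusLine().getStatusCode());
        assertTrue(EntityUtils.toString(response.getEntity()).contains("cluster_name"));
    }
}
```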
This adds new circuit breaking with the "request" breaker, which adds
circuit breaks based on the number of buckets created during
aggregations. It consists of incrementing the breaker during AggregatorBase creation.
This also bumps the REQUEST breaker to 60% of the JVM heap.
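A hedged sketch of the accounting idea (the per-aggregator estimate and the helper are illustrative, not the actual AggregatorBase code):
```java
import org.elasticsearch.common.breaker.CircuitBreaker;

final class AggregatorAccounting {
    private static final long BYTES_PER_AGGREGATOR = 1024; // illustrative estimate

    // Called during AggregatorBase creation; once the "request" breaker's limit
    // is exceeded this throws a CircuitBreakingException like the one below.
    static void reserve(CircuitBreaker requestBreaker, String aggName) {
        requestBreaker.addEstimateBytesAndMaybeBreak(BYTES_PER_AGGREGATOR, "<agg [" + aggName + "]>");
    }
}
```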
The output when circuit breaking an aggregation looks like:
```json
{
"shard" : 0,
"index" : "i",
"node" : "a5AvjUn_TKeTNYl0FyBW2g",
"reason" : {
"type" : "exception",
"reason" : "java.util.concurrent.ExecutionException: QueryPhaseExecutionException[Query Failed [Failed to execute main query]]; nested: CircuitBreakingException[[request] Data too large, data for [<agg [otherthings]>] would be larger than limit of [104857600/100mb]];",
"caused_by" : {
"type" : "execution_exception",
"reason" : "QueryPhaseExecutionException[Query Failed [Failed to execute main query]]; nested: CircuitBreakingException[[request] Data too large, data for [<agg [myagg]>] would be larger than limit of [104857600/100mb]];",
"caused_by" : {
"type" : "circuit_breaking_exception",
"reason" : "[request] Data too large, data for [<agg [otherthings]>] would be larger than limit of [104857600/100mb]",
"bytes_wanted" : 104860781,
"bytes_limit" : 104857600
}
}
}
}
```
Relates to #14046
Messy tests with mustache were either moved to core, moved to a REST test, or remained untouched if they actually tested mustache.
Also removed tests that were redundant.
After #13834, many tests that used Groovy scripts (for good or bad reasons) were moved to the lang-groovy module, and issue #13837 was created to track these messy tests in order to clean them up.
This commit moves more tests back to core, removes the dependency on Groovy, changes the scripts to use the mocked script engine, and changes the tests to integration tests.
This change renames the package org.elasticsearch.search.fetch.fielddata to org.elasticsearch.search.fetch.docvalues and renames the
FieldData* classes to DocValue*. This is a follow-up of the renaming that happened in #18943.
With #19140 we started persisting the node ID across node restarts. Now that we have a "stable" anchor, we can use it to generate a stable default node name and make it easier to track nodes over restarts. Sadly, this means we will not have those random fun Marvel characters, but we feel this is the right tradeoff.
On the implementation side, this requires a bit of juggling because we now need to read the node ID from disk before we can log, as the node name is part of each log message. The PR moves the initialization of NodeEnvironment as high up in the startup sequence as possible, with only one logging message before it to indicate we are initializing. Things now look like this:
```
[2016-07-15 19:38:39,742][INFO ][node ] [_unset_] initializing ...
[2016-07-15 19:38:39,826][INFO ][node ] [aAmiW40] node name set to [aAmiW40] by default. set the [node.name] settings to change it
[2016-07-15 19:38:39,829][INFO ][env ] [aAmiW40] using [1] data paths, mounts [[ /(/dev/disk1)]], net usable_space [5.5gb], net total_space [232.6gb], spins? [unknown], types [hfs]
[2016-07-15 19:38:39,830][INFO ][env ] [aAmiW40] heap size [1.9gb], compressed ordinary object pointers [true]
[2016-07-15 19:38:39,837][INFO ][node ] [aAmiW40] version[5.0.0-alpha5-SNAPSHOT], pid[46048], build[473d3c0/2016-07-15T17:38:06.771Z], OS[Mac OS X/10.11.5/x86_64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_51/25.51-b03]
[2016-07-15 19:38:40,980][INFO ][plugins ] [aAmiW40] modules [percolator, lang-mustache, lang-painless, reindex, aggs-matrix-stats, lang-expression, ingest-common, lang-groovy, transport-netty], plugins []
[2016-07-15 19:38:43,218][INFO ][node ] [aAmiW40] initialized
```
Needless to say, setting `node.name` explicitly still works as before.
The commit also contains some clean-ups to the relationship between Environment, Settings and Plugins. The previous code suggested the path-related settings could be changed after the initial Environment was created. This did not have any effect, as the security manager had already locked things down.