upgrades if it determines the read data is in the legacy
format. If the repository is not read-only, it writes the
upgraded version; if the repository is read-only, it caches
the repository data instead.
Add an assertion to the most popular way of turning the response object
into the actual HTTP response. As it stands, every place we return
`201 CREATED` we also return the `Location` header. This assertion will
help keep it that way, though it won't catch all uses.
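A minimal sketch of the kind of assertion this adds (the method and variable names here are illustrative, not the exact Elasticsearch code):
```java
// Sketch only: fail fast whenever a 201 CREATED response is built
// without a Location header. `status` and `location` are hypothetical.
static void assertLocationHeader(RestStatus status, String location) {
    assert status != RestStatus.CREATED || location != null
            : "responses with 201 CREATED must carry a Location header";
}
```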
Followup to #19509
When you simulate a pipeline without specifying an id against a node where the request is redirected to a master node,
serializing the request and the response throws an NPE:
```
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([3B9536AC6AA23C06:DD62280CF765DA1F]:0)
at org.elasticsearch.common.io.stream.StreamOutput.writeString(StreamOutput.java:300)
at org.elasticsearch.action.ingest.SimulatePipelineRequest.writeTo(SimulatePipelineRequest.java:92)
at org.elasticsearch.transport.local.LocalTransport.sendRequest(LocalTransport.java:222)
at org.elasticsearch.test.transport.AssertingLocalTransport.sendRequest(AssertingLocalTransport.java:95)
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:470)
at org.elasticsearch.action.TransportActionNodeProxy.execute(TransportActionNodeProxy.java:51)
at org.elasticsearch.client.transport.support.TransportProxyClient.lambda$execute$441(TransportProxyClient.java:63)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:233)
at org.elasticsearch.client.transport.support.TransportProxyClient.execute(TransportProxyClient.java:63)
at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:309)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:403)
at org.elasticsearch.client.FilterClient.doExecute(FilterClient.java:67)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:403)
at org.elasticsearch.client.support.AbstractClient$ClusterAdmin.execute(AbstractClient.java:710)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:80)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:54)
at org.elasticsearch.action.ActionRequestBuilder.get(ActionRequestBuilder.java:62)
at org.elasticsearch.ingest.bano.BanoProcessorIntegrationTest.testSimulateProcessorConfigTarget(BanoProcessorIntegrationTest.java:139)
```
This patch fixes this and adds some random tests.
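A minimal sketch of the null-safe serialization involved, assuming the request's `id` field may be absent; the field layout here is illustrative:
```java
// Sketch: serialize a nullable id with writeOptionalString instead of
// writeString, which throws an NPE on null (as the stack trace above shows).
@Override
public void writeTo(StreamOutput out) throws IOException {
    super.writeTo(out);
    out.writeOptionalString(id); // id may be null when simulating without an id
    out.writeBoolean(verbose);
    out.writeBytesReference(source);
}
```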
* fix explain in function_score if no function filter matches
When each function in function_score has a filter but none of them matches,
we always assume 1 for the combined functions and then combine that with the
sub-query score.
But the explanation did not reflect that: when no function matched,
the explanation did not even use the actual score that was computed.
The reason for this change is that currently, if a user specifies e.g. `2M`
meaning 2 months as a time value, instead of throwing an exception
explaining that time units in months are not supported (because months
have variable time spans), we parse this as 2 minutes.
This could surprise a user and could put a lot of load on
the cluster performing a task that was never intended and whose results
will be useless anyway.
It is generally accepted that `m` indicates minutes and `M` indicates
months with time values so this is consistent with the expectations a
user might have around specifying time units.
A concrete example of where this causes issues is in the decay score
function which uses TimeValue to parse the scale and offset parameters
of the decay into millisecond values to use in the calculation.
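For illustration, a hedged sketch of the intended behavior (using `TimeValue.parseTimeValue`; treat the exact signature and exception type as assumptions):
```java
// "2m" parses to two minutes, as expected:
TimeValue scale = TimeValue.parseTimeValue("2m", null, "scale"); // 2 minutes

// "2M" (months) is no longer silently read as two minutes; it should now
// fail with a parse exception explaining that months are not supported:
TimeValue offset = TimeValue.parseTimeValue("2M", null, "offset"); // throws
```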
Relates to #19619
Currently, to use the ResourceWatcherService to watch files, you
implement a FileChangesListener. However, this is a class, not an
interface, even though it carries no state and just
defines a few methods. This change converts FileChangesListener to an
interface.
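Roughly, the resulting shape (a simplified sketch, not the full set of callbacks):
```java
import java.nio.file.Path;

// Sketch: as an interface with default no-op methods, implementors only
// override the callbacks they care about.
public interface FileChangesListener {
    default void onFileCreated(Path file) {}
    default void onFileChanged(Path file) {}
    default void onFileDeleted(Path file) {}
}
```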
The tests for authentication extend ESIntegTestCase and use a mock
authentication plugin. This way the clients don't have to worry about
running it. Sadly, that means we don't really have good coverage on the
REST portion of the authentication.
This also adds ElasticsearchStatusException, an exception on which you
can set an explicit status. The nice thing about it is that you can
set the RestStatus that it returns to whatever arbitrary status you like
based on the status that comes back from the remote system.
reindex-from-remote then uses it to wrap all remote failures, preserving
the status from the remote Elasticsearch or whatever proxy sits between us
and the remote Elasticsearch.
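For example, a hedged sketch of how a remote failure might be wrapped (the constructor shape and variable names are assumptions based on the description above):
```java
// Preserve the remote HTTP status instead of collapsing everything to 500:
RestStatus status = RestStatus.fromCode(remoteResponse.getStatusLine().getStatusCode());
throw new ElasticsearchStatusException("failure talking to remote cluster", status, cause);
```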
It's possible that the shard has been closed but the resources
associated with it have not yet been released. This waits until the
index lock can be obtained before running the tool.
This test fails because of an unknown exception in the FsService.stats() method, which causes no stats to be returned. With this change the exception that is causing this issue is logged.
Related to #19591 and #17964
This makes it obvious that these tests are for running the client yaml
suites. Now that there are other ways of running tests using the REST
client against a running cluster we can't go on calling the shared
client yaml tests "REST tests". They are REST tests, but they aren't
**the** REST tests.
This adds an extra REST handler for "_ingest/pipeline" so that users do not need to supply "_ingest/pipeline/*" to get all of them.
- Also adds a teardown section to the related REST tests for ingest.
Performing the bulk request shown in #19267 now results in the following:
```
{"_index":"test","_type":"test","_id":"1","_version":1,"_operation":"create","forced_refresh":false,"_shards":{"total":2,"successful":1,"failed":0},"status":201}
{"_index":"test","_type":"test","_id":"1","_version":1,"_operation":"noop","forced_refresh":false,"_shards":{"total":2,"successful":1,"failed":0},"status":200}
```
This adds the `bin/elasticsearch-translog` bin file that will be used
for CLI tasks pertaining to the translog. Currently it implements only
a single sub-command, `truncate`, that creates a truncated
translog for a given folder.
Here's what running the tool looks like:
```
λ bin/elasticsearch-translog truncate -d data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/
Checking existing translog files
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! WARNING: Elasticsearch MUST be stopped before running this tool !
! !
! WARNING: Documents inside of translog files will be lost !
! !
! WARNING: The following files will be DELETED! !
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-10.tlog
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-18.tlog
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-21.tlog
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-12.ckp
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-25.ckp
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-29.tlog
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-2.tlog
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-5.tlog
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-41.ckp
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-6.ckp
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-37.ckp
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-24.ckp
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-11.ckp
Continue and DELETE files? [y/N] y
Reading translog UUID information from Lucene commit from shard at [data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/index]
Translog Generation: 3
Translog UUID : AxqC4rocTC6e0fwsljAh-Q
Removing existing translog files
Creating new empty checkpoint at [data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog.ckp]
Creating new empty translog at [data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-3.tlog]
Done.
```
It also includes a `-b` batch option that can be used to skip the
confirmation dialog.
Resolves #19123
This change adds a new special path to the buckets_path syntax
`_bucket_count`. This new option will return the number of buckets for a
multi-bucket aggregation, which can then be used in pipeline
aggregations.
Closes #19553
When the request body is missing, all documents in the target index are counted.
As mentioned in #19422, the same should happen when the request body is an empty
json object. This is also the behaviour for the `_search` endpoint and the two
APIs should behave in the same way.
Previously, the aggregation tree was traversed to figure out what the parent level is; this commit
changes that to use `NestedScope` to figure out the nested depth level. The big upsides
are that this cleans up `NestedAggregator` (it used a hack to lazily figure out the nested parent filter)
and that this is also what the `nested` query uses, so a `nested` query can be included inside a `nested`
aggregation and work correctly.
Closes #11749
Closes #12410
This adds a header that looks like `Location: /test/test/1` to the
response for the index/create/update API. The requirement for the header
comes from https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html while
https://tools.ietf.org/html/rfc7231#section-7.1.2 claims that relative
URIs are OK. So we use an absolute path which should resolve to the
appropriate location.
Closes #19079
This makes large changes to our rest test infrastructure, allowing us
to write junit tests that test a running cluster via the rest client.
It does this by splitting ESRestTestCase into two classes:
* ESRestTestCase is the superclass of all tests that use the rest client
to interact with a running cluster.
* ESClientYamlSuiteTestCase is the superclass of all tests that use the
rest client to run the yaml tests. These tests are shared across all
official clients, thus the `ClientYamlSuite` part of the name.
This adds new circuit breaking with the "request" breaker, which adds
circuit breaks based on the number of buckets created during
aggregations. It works by incrementing the breaker during AggregatorBase creation.
This also bumps the REQUEST breaker to 60% of the JVM heap.
The output when circuit breaking an aggregation looks like:
```json
{
  "shard" : 0,
  "index" : "i",
  "node" : "a5AvjUn_TKeTNYl0FyBW2g",
  "reason" : {
    "type" : "exception",
    "reason" : "java.util.concurrent.ExecutionException: QueryPhaseExecutionException[Query Failed [Failed to execute main query]]; nested: CircuitBreakingException[[request] Data too large, data for [<agg [otherthings]>] would be larger than limit of [104857600/100mb]];",
    "caused_by" : {
      "type" : "execution_exception",
      "reason" : "QueryPhaseExecutionException[Query Failed [Failed to execute main query]]; nested: CircuitBreakingException[[request] Data too large, data for [<agg [myagg]>] would be larger than limit of [104857600/100mb]];",
      "caused_by" : {
        "type" : "circuit_breaking_exception",
        "reason" : "[request] Data too large, data for [<agg [otherthings]>] would be larger than limit of [104857600/100mb]",
        "bytes_wanted" : 104860781,
        "bytes_limit" : 104857600
      }
    }
  }
}
```
Relates to #14046
Messy tests with mustache were either moved to core, moved to a rest test or remained untouched if they actually tested mustache.
Also removed tests that were redundant.
After #13834, many tests that used Groovy scripts (for good or bad reasons) were moved into the lang-groovy module, and issue #13837 was created to track these messy tests in order to clean them up.
This commit moves more tests back into core, removes the dependency on Groovy, changes the scripts to use the mocked script engine, and changes the tests to integration tests.
This change renames the package org.elasticsearch.search.fetch.fielddata to org.elasticsearch.search.fetch.docvalues and renames the
FieldData* classes to DocValue*. This is a follow-up of the renaming that happened in #18943.
Azure and Google cloud blob containers, as the APIs for both return
a 404 in the case of a missing object, which we already handle through
a NoSuchFileException.
With #19140 we started persisting the node ID across node restarts. Now that we have a "stable" anchor, we can use it to generate a stable default node name and make it easier to track nodes across restarts. Sadly, this means we will not have those random fun Marvel characters, but we feel this is the right tradeoff.
On the implementation side, this requires a bit of juggling because we now need to read the node ID from disk before we can log, as the node name is part of each log message. The PR moves the initialization of NodeEnvironment as high up in the starting sequence as possible, with only one log message before it to indicate we are initializing. Things now look like this:
```
[2016-07-15 19:38:39,742][INFO ][node ] [_unset_] initializing ...
[2016-07-15 19:38:39,826][INFO ][node ] [aAmiW40] node name set to [aAmiW40] by default. set the [node.name] settings to change it
[2016-07-15 19:38:39,829][INFO ][env ] [aAmiW40] using [1] data paths, mounts [[ /(/dev/disk1)]], net usable_space [5.5gb], net total_space [232.6gb], spins? [unknown], types [hfs]
[2016-07-15 19:38:39,830][INFO ][env ] [aAmiW40] heap size [1.9gb], compressed ordinary object pointers [true]
[2016-07-15 19:38:39,837][INFO ][node ] [aAmiW40] version[5.0.0-alpha5-SNAPSHOT], pid[46048], build[473d3c0/2016-07-15T17:38:06.771Z], OS[Mac OS X/10.11.5/x86_64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_51/25.51-b03]
[2016-07-15 19:38:40,980][INFO ][plugins ] [aAmiW40] modules [percolator, lang-mustache, lang-painless, reindex, aggs-matrix-stats, lang-expression, ingest-common, lang-groovy, transport-netty], plugins []
[2016-07-15 19:38:43,218][INFO ][node ] [aAmiW40] initialized
```
Needless to say, setting `node.name` explicitly still works as before.
The commit also contains some cleanups to the relationship between Environment, Settings and Plugins. The previous code suggested the path-related settings could be changed after the initial Environment was created. This did not have any effect as the security manager already locked things down.
Makes deleting snapshots more robust by first deleting the
snapshot from the index generational file, then handling
individual deletion file errors with log messages instead of
failing the entire operation.
Added BlobContainer tests for HDFS storage
and caught a bug at the same time in which
deleteBlob was not raising an IOException
when the blobName did not exist.
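A sketch of the fixed contract under test (method shapes assumed from the description above):
```java
// Sketch: deleting a missing blob must raise an IOException rather than
// silently succeeding.
@Override
public void deleteBlob(String blobName) throws IOException {
    if (blobExists(blobName) == false) {
        throw new NoSuchFileException("blob [" + blobName + "] not found");
    }
    // ... delete the blob from HDFS ...
}
```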
Previously when trying to listen on virtual interfaces during
bootstrap the application would stop working - the interface
couldn't be found by the NetworkUtils class.
The NetworkUtils utilize the underlying JDK NetworkInterface
class which, when asked to lookup by name only takes physical
interfaces into account, failing at virtual (or subinterfaces)
ones (returning null).
Note that when iterating over all interfaces, both physical and
virtual ones are taken into account.
This changeset asks for all known interfaces, iterates over them
and matches on the given name as part of the loop, allowing it
to catch both physical and virtual interfaces.
As a result, elasticsearch can now also serve on virtual
interfaces.
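In plain JDK terms, the lookup now looks roughly like this (a sketch, not the exact NetworkUtils code):
```java
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.Collections;

// Iterate every known interface and match by name; unlike
// NetworkInterface.getByName, this also catches virtual (sub)interfaces.
static NetworkInterface getByName(String name) throws SocketException {
    for (NetworkInterface ni : Collections.list(NetworkInterface.getNetworkInterfaces())) {
        if (name.equals(ni.getName())) {
            return ni;
        }
    }
    return null;
}
```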
A test case has been added which at least makes sure that all
iterable interfaces can be found by their respective name. (It's
not easily possible in a unit test to "fake" virtual interfaces).
Relates #19537
Fixes CORS handling so that it uses the defaults for http.cors.allow-methods
and http.cors.allow-headers if none are specified in the config.
Closes #19520
#19096 introduced a generic TCPTransport base class so we can have multiple TCP-based transport implementations. These implementations can vary in how they respond internally to situations where we concurrently send, receive and handle disconnects, and can throw different exceptions. However, disconnects are important events for the rest of the code base and should be distinguished from other errors (for example, a disconnect signals TransportMasterAction that it needs to retry and wait for a (new) master to come back). Therefore, we should make sure that all the implementations do the proper translation from their internal exceptions into ConnectTransportException, which is used externally.
Similarly, we should make sure that the transport implementations properly recognize errors that were caused by a disconnect and deal with them correctly. This was, for example, the source of a build failure at https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+master+multijob-intake/1080 , where a concurrency issue caused a SocketException to bubble out of MockTcpTransport.
This PR adds a test which concurrently simulates connects, disconnects, sending and receiving, and makes sure the above holds. It also fixes anything (not much!) that was found.
creation timeout so they process the index creation cluster state update
before the test finishes and attempts to cleanup. Otherwise, the index
creation cluster state update could be processed after the test finishes
and cleans up, thereby leaking an index in the cluster state that could
cause issues for other tests that wouldn't expect the index to exist.
Closes #19530
In the absence of tests, the analyzer alias feature was pretty much not working
at all on current master. Issues like #19163 showed some serious problems for users
using this feature when upgrading to an alpha version.
This change fixes the processing order and allows aliases to be set for
existing analyzers like `default`. This change also ensures that if `default`
is aliased the correct analyzer is used for `default_search` etc.
Closes #19163
Add parser for anonymous char_filters/tokenizer/token_filters
Use Settings in AnalyzeRequest for anonymous definitions
Add breaking changes document
Closes #8878
* rethrow script compilation exceptions into ingest configuration exceptions
* update readProcessor to rethrow any exception as an ElasticsearchException
Remove `ParseField` constants used for names where there are no deprecated
names and just use the `String` version of the registration method instead.
This is step 2 in cleaning up the plugin interface for extending
search time actions. Aggregations are next.
This is breaking for plugins because those that register a new query should
now implement `SearchPlugin` rather than `onModule(SearchModule)`.
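A hedged sketch of the new-style registration (`MyQueryBuilder` and `MyPlugin` are hypothetical):
```java
import java.util.Collections;
import java.util.List;

// Sketch: plugins now implement SearchPlugin instead of onModule(SearchModule).
public class MyPlugin extends Plugin implements SearchPlugin {
    @Override
    public List<QuerySpec<?>> getQueries() {
        return Collections.singletonList(
            new QuerySpec<>("my_query", MyQueryBuilder::new, MyQueryBuilder::fromXContent));
    }
}
```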
When index creation is not acknowledged (due to a very low request timeout) it is possible that the index is still created.
If a subsequent index-exists request completes before the cluster state of the index creation has been fully applied, it
might miss the newly created index.
* Remove outdated aggregation registration method
* Remove AggregationStreams
* Adds StreamInput#readNamedWriteableList and
StreamOutput#writeNamedWriteableList convenience methods (see the sketch
after this list). We strive to make the reading and writing from the
streams terse so they are easier to scan visually.
* Remove PipelineAggregatorStreams
* Remove stream info from InternalAggregation.Type
* Remove InternalAggregation#type
* Remove Streamable from PipelineAggregator
* Remove Streamable from MultiBucketsAggregation.Bucket
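As mentioned above, a hedged sketch of the convenience-method pair (the exact reader signature is an assumption):
```java
// Writing: the list's elements are NamedWriteables, so each entry carries its name.
out.writeNamedWriteableList(pipelineAggregators);

// Reading: the category class selects the right set of registered readers.
List<PipelineAggregator> pipelineAggregators =
        in.readNamedWriteableList(PipelineAggregator.class);
```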
During query refactoring the query string query parameter
'auto_generate_phrase_queries' was accidentally renamed
to 'auto_generated_phrase_queries'.
With this commit we restore the old name.
Closes #19512
Currently if a string field is not_analyzed, but a
position_increment_gap is set, it will lookup the default analyzer and
set it, along with the position_increment_gap, before the code which
handles setting the keyword analyzer for not_analyzed fields has a
chance to run. This change adds a parsing check and test for that case.
ThreadPool#schedule can throw a rejected execution exception. Yet, the
rejected execution exception that it throws comes from the EsAbortPolicy,
which throws an EsRejectedExecutionException. This exception does not
inherit from RejectedExecutionException, so we must catch
EsRejectedExecutionException rather than RejectedExecutionException.
A self-rescheduling runnable can hit a rejected execution exception but
this exception goes uncaught. Instead, this exception should be caught
and passed to the onRejected handler. Not catching this
rejected execution exception can lead to test failures. Namely, a race
condition can arise between the shutting down of the thread pool and
cancelling of the rescheduling of the task. If another reschedule fires
right as the thread pool is being terminated, the rescheduled task will
be rejected leading to an uncaught exception which will cause a test
failure. This commit addresses these issues.
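A minimal sketch of the required catch (the handler and variable names are illustrative, and the `schedule` signature is an assumption based on the description above):
```java
try {
    threadPool.schedule(interval, executorName, this); // reschedule ourselves
} catch (EsRejectedExecutionException e) {
    // EsRejectedExecutionException does not extend RejectedExecutionException,
    // so it must be caught explicitly and routed to the rejection handler.
    onRejection(e);
}
```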
Relates #19505
The ThreadPool#scheduleWithFixedDelay method does not make it clear that all scheduled runnable instances
will be run on the scheduler thread. This becomes problematic if the actions being performed include
blocking operations since there is a single thread and tasks may not get executed due to a blocking task.
This change includes a few different aspects around trying to prevent this situation. The first is that
the scheduleWithFixedDelay method now requires the name of the executor that should be used to execute
the runnable. All existing calls were updated to use Names.SAME to preserve the existing behavior.
The second aspect is the removal of using ScheduledThreadPoolExecutor#scheduleWithFixedDelay in favor of
a custom runnable, ReschedulingRunnable. This runnable encapsulates the logic to deal with rescheduling a
runnable with a fixed delay and mimics the behavior of executing using a ScheduledThreadPoolExecutor and
provides a ScheduledFuture implementation that also mimics the type returned by a
ScheduledThreadPoolExecutor.
Finally, an assertion was added to BaseFuture to detect blocking calls that are being made on the scheduler
thread.
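A hedged usage sketch of the new signature (the task body is hypothetical):
```java
// Callers now pick the executor; Names.SAME keeps the old behavior of
// running on the scheduler thread, anything else avoids blocking it.
ThreadPool.Cancellable task = threadPool.scheduleWithFixedDelay(
        () -> cleanupExpiredEntries(),   // hypothetical blocking work
        TimeValue.timeValueSeconds(30),
        ThreadPool.Names.GENERIC);
```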
When looking at the logstash template, I noticed that it has definitions for
dynamic templates with `match_mapping_type` equal to `byte` for instance.
However elasticsearch never tries to find templates that match the byte type
(only long or double as far as numbers are concerned). This commit changes
template parsing in order to ignore bad values of `match_mapping_type` (given
how the logstash template is popular, this would break many upgrades
otherwise). Then I hope to fail the parsing on bad values in 6.0.
We used to mutate it as part of building the aggregation. That
caused assertVersionSerializable to fail because it assumes that
requests aren't mutated after they are sent.
Closes #19481
Primary relocation violates two invariants that ensure proper interaction between document replication and peer recoveries, ultimately leading to documents not being properly replicated.
Invariant 1: Document writes must be replicated based on the routing table of a cluster state that includes all shards which have ongoing or finished recoveries. This is ensured by the fact that we do not start a recovery that is not reflected by the cluster state available on the primary node, and we always sample a fresh cluster state before starting to replicate write operations.
Invariant 2: Every operation that is not part of the snapshot taken for phase 2 must be successfully indexed on the target replica (pending shard-level errors, which will cause the target shard to be failed). To ensure this, we start replicating to the target shard as soon as the recovery starts and open its engine before we take the snapshot. All operations that are indexed after the snapshot was taken are guaranteed to arrive at the shard when it's ready to index them. Note that this also means that the replication doesn't fail a shard if it's not yet ready to receive operations - it's a normal part of a recovering shard.
With primary relocations, the two invariants can be violated. Let's consider a primary relocating while there is another replica shard recovering from the primary shard.
Invariant 1 can be violated if the target of the primary relocation is so lagging on cluster state processing that it doesn't even know about the new initializing replica. This is very rare in practice as replica recoveries take time to copy all the index files, but it is a theoretical gap that surfaces in testing scenarios.
Invariant 2 can be violated even if the target primary knows about the initializing replica. This can happen if the target primary replicates an operation to the initializing shard and that operation arrives at the initializing shard before it opens its engine, but arrives at the primary source after it has taken the snapshot of the translog. Those operations will currently be missed on the new initializing replica.
The fix to reestablish invariant 1 is to ensure that the primary relocation target has a cluster state with all replica recoveries that were successfully started on the primary relocation source. The fix to reestablish invariant 2 is to check, after opening the engine on the replica, whether the primary has been relocated in the meanwhile, and fail the recovery if so.
Closes #19248
RecoveryTarget increments a reference on the store once it's
created. If we fail to return the instance from the reset method
we leak a reference causing shard locks to not be released. This
change creates the reference in the return statement to ensure no
references are leaked.
An initializing replica shard might not have an UnassignedInfo object, for example when it is a relocation target. The method allocatedPostIndexCreate does not account for this situation.
Today, when we reset a recovery because the source is not
ready or the shard is getting removed on the source (for whatever reason),
we wipe all temp files and reset the recovery without respecting any
reference counting or locking; all streams are closed and files are
wiped. Yet, this is problematic since we assert that some files are on disk
when we finish writing a file. These assertions don't hold anymore if we
concurrently wipe the tmp files.
This change moves the logic out of RecoveryTarget into RecoveriesCollection which
basically clones the RecoveryTarget on reset instead which allows in-flight operations
to finish gracefully. This means we now have a single path for cleanups in RecoveryTarget
and can safely use assertions in the class since files won't be removed unless the recovery
is either canceled, failed or finished.
Closes #19473
Today they don't because the create index request that is implicitly created
adds an empty mapping for the type of the document. So to Elasticsearch it
looks like this type was explicitly created and `index.mapper.dynamic` is not
checked.
Closes #17592
The test would previously catch Throwable and then decide if it was a critical exception or not. As the catch block was changed from Throwable to Exception this made the test fail for non-critical exceptions. This commit changes the test so that exceptions are only thrown when they're unexpected.
This is the first pipeline aggregation that doesn't have its own
bucket type that needs serializing. It uses InternalHistogram instead.
So that required reworking the new-style `registerAggregation` method
to not require bucket readers. So I built `PipelineAggregationSpec` to
mirror `AggregationSpec`. It allows registering any number of bucket
readers or result readers.
The `client/transport` project adds a new jar build project that
pulls in all dependencies and configures all required modules.
Preinstalled modules are:
* transport-netty
* lang-mustache
* reindex
* percolator
The `TransportClient` classes are still in core
while `TransportClient.Builder` has only a protected constructor
such that users are redirected to use the new `TransportClientBuilder`
from the new jar.
Closes #19412