These tests rely on the fact that all files stay the same after
the corruption; if we run into a translog-based flush we might
use a new / different delete file, causing the test to fail.
Today, due to how netty works (both on the http layer and the transport layer), and even though the buffers sent over to netty are paged (CompositeChannelBuffer), it ends up re-copying the whole buffer into another heap buffer (bad), and then sending it directly to sun.nio, which allocates a full thread-local direct buffer to send it (this can be repeated if not all of the message is sent).
This is problematic for very large messages: aside from the extra temporary heap usage, the large direct buffers will stay around and not be released by the JVM.
This change forces the use of gathering when building a CompositeChannelBuffer, which results in netty using the sun.nio write method that accepts an array of ByteBuffer (so no extra heap copying), and also reduces the amount of direct memory allocated for large messages.
See the doc on NettyUtils#DEFAULT_GATHERING for more info.
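As a rough illustration of what a gathering write buys (plain java.nio rather than the netty code path; host and port are placeholders), the kernel is handed an array of ByteBuffers in a single call instead of a merged copy:
````
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

public class GatheringWriteExample {
    public static void main(String[] args) throws IOException {
        // three "pages" that would otherwise be copied into one big buffer before sending
        ByteBuffer[] pages = new ByteBuffer[] {
                ByteBuffer.wrap("page-1 ".getBytes(StandardCharsets.UTF_8)),
                ByteBuffer.wrap("page-2 ".getBytes(StandardCharsets.UTF_8)),
                ByteBuffer.wrap("page-3".getBytes(StandardCharsets.UTF_8))
        };
        try (SocketChannel channel = SocketChannel.open(new InetSocketAddress("localhost", 9300))) {
            // GatheringByteChannel#write(ByteBuffer[]) sends all pages in one call
            // without first merging them into a single heap or direct buffer
            long written = channel.write(pages);
            System.out.println("wrote " + written + " bytes with a single gathering write");
        }
    }
}
````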
closes#7811
Today we rely on the metadata length of the file we are recovering
to indicate when the last chunk was received. Yet, this might hide bugs
in the compression layer if payloads are truncated. We should indicate
whether the last chunk is sent to make sure we validate checksums
accordingly if possible.
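A minimal sketch of the idea, with illustrative names rather than the actual recovery classes: the sender marks the final chunk explicitly, and the receiver validates the checksum when that flag arrives instead of inferring the end from the metadata length:
````
import java.util.zip.CRC32;

public class ChunkedFileReceiver {
    private final CRC32 digest = new CRC32();
    private final long expectedChecksum;

    public ChunkedFileReceiver(long expectedChecksum) {
        this.expectedChecksum = expectedChecksum;
    }

    /** @param lastChunk set by the sender on the final chunk, rather than inferred from length */
    public void onChunk(byte[] content, boolean lastChunk) {
        digest.update(content, 0, content.length);
        if (lastChunk && digest.getValue() != expectedChecksum) {
            // a truncated payload (e.g. a compression-layer bug) now fails loudly here
            throw new IllegalStateException("checksum mismatch on last chunk of recovered file");
        }
    }
}
````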
Closes#7830
As of version 1.0, FilterBuilders and QueryBuilders are no longer part of the org.elasticsearch.index.query.xcontent package.
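For example, a sketch of client code against a 1.x client jar (assuming the elasticsearch 1.x dependency is on the classpath):
````
import org.elasticsearch.index.query.FilterBuilder;
import org.elasticsearch.index.query.FilterBuilders;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

public class PackageMoveExample {
    public static void main(String[] args) {
        // these builders previously lived under org.elasticsearch.index.query.xcontent
        QueryBuilder query = QueryBuilders.matchAllQuery();
        FilterBuilder filter = FilterBuilders.existsFilter("user");
        System.out.println(query + " " + filter);
    }
}
````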
Closes#7701.
(cherry picked from commit 32d4200)
Before phase2 we verify that the local mapping is in sync with the cluster state mapping (and send & wait on a master update-mapping task if not). This check should be done under a cluster state update task to make sure an incoming cluster state update does not change things while we check.
Closes#7744
During discovery a node gossips with other nodes to discover the current state of the cluster - what nodes are out there, what version they use and, most importantly, whether there is an active master out there. During this ping process we may end up in a situation where old information is mixed with new. This is common if a couple of master elections happen in rapid succession.
This commit adds a monotonically increasing id to each ping response. This makes it easy to always select the last ping from every node.
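A minimal sketch of the mechanism (illustrative names, not the actual zen discovery classes): ids come from a monotonically increasing counter, and per node only the response with the highest id wins:
````
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class PingCollector {
    // monotonically increasing id assigned to every ping response we send out
    private static final AtomicLong PING_ID_GENERATOR = new AtomicLong();

    public static long nextPingId() {
        return PING_ID_GENERATOR.incrementAndGet();
    }

    // per node keep only the response with the highest id, i.e. the most recent information
    private final Map<String, long[]> latestByNode = new ConcurrentHashMap<>();

    public void onPingResponse(String nodeId, long pingId, boolean hasActiveMaster) {
        long[] candidate = new long[] { pingId, hasActiveMaster ? 1 : 0 };
        latestByNode.merge(nodeId, candidate,
                (previous, incoming) -> incoming[0] > previous[0] ? incoming : previous);
    }
}
````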
Closes#7769
This contains several cleanups to the indexed scripts.
Remove the unused FetchSourceContext from the Get request.
Add lang, _version and _id to the REST GET API.
Remove the routing from GetIndexedScriptRequest since the script index is a single shard that is replicated across all nodes.
Fix backward compatible template file reference
Before 1.3.0, on-disk scripts could be referenced by requesting
````
_search/template
{
"template" : "ondiskscript"
}
````
This was broken in 1.3.0 by requiring
````
{
"template" :
{
"file" : "ondiskscript"
}
}
````
This commit restores the previous behavior.
Remove support for preference, realtime and refresh
These parameters don't make sense anymore for indexed scripts as we always force the preference to _local and
always refresh after a Put to the indexed scripts index.
Closes #7568, Closes #7559, Closes #7647, Closes #7567
When using the DiskThresholdDecider, it's possible that shards could
already be marked as relocating to the node being evaluated. This commit
adds a new setting `cluster.routing.allocation.disk.include_relocations`
which adds the size of the shards currently being relocated to this node
to the node's used disk space.
This new option defaults to `true`; however, it's possible to
over-estimate the usage for a node if the relocation is already
partially complete. For instance:
A node with a 10gb shard that's 45% of the way through a relocation
would add 10gb + (.45 * 10) = 14.5gb to the node's disk usage before
examining the watermarks to see if a new shard can be allocated.
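In sketch form (illustrative names, not the DiskThresholdDecider code itself), the estimate with `include_relocations` enabled simply adds the full size of every incoming relocation on top of the reported usage:
````
import java.util.Arrays;
import java.util.List;

public class DiskUsageEstimate {
    /** reported usage plus the full size of each shard currently relocating to the node */
    public static long estimatedUsedBytes(long reportedUsedBytes, List<Long> incomingRelocationSizes) {
        long estimate = reportedUsedBytes;
        for (long shardSizeBytes : incomingRelocationSizes) {
            // the whole shard size is counted even if part of it has already been copied,
            // which is how a 45% complete 10gb relocation shows up as 14.5gb of usage
            estimate += shardSizeBytes;
        }
        return estimate;
    }

    public static void main(String[] args) {
        long tenGb = 10L * 1024 * 1024 * 1024;
        long alreadyCopied = (long) (0.45 * tenGb);  // ~4.5gb already on disk
        System.out.println(estimatedUsedBytes(alreadyCopied, Arrays.asList(tenGb))); // ~14.5gb
    }
}
````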
Fixes#7753
Relates to #6168
The bulk API request was marked as completely failed
if any of the requests inside the bulk referred to a closed index.
Implementation note: currently the implementation is a bit more verbose in order to prevent an instanceof check and another cast - if that is fast enough, we could execute that logic only once at the
beginning of the loop (thinking this might be a bit of an over-optimization here).
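A simplified sketch of the per-item handling (hypothetical types, not the actual TransportBulkAction code): requests that target a closed index become item-level failures while the rest of the bulk proceeds:
````
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class BulkClosedIndexCheck {
    public interface BulkItem { String index(); }

    public static class ItemFailure {
        public final String index;
        public final String reason;
        public ItemFailure(String index, String reason) { this.index = index; this.reason = reason; }
    }

    /** items hitting a closed index are turned into failures; everything else goes to `toExecute` */
    public static List<ItemFailure> partition(List<BulkItem> items, Set<String> closedIndices,
                                              List<BulkItem> toExecute) {
        List<ItemFailure> failures = new ArrayList<>();
        for (BulkItem item : items) {
            if (closedIndices.contains(item.index())) {
                failures.add(new ItemFailure(item.index(), "index is closed"));
            } else {
                toExecute.add(item);
            }
        }
        return failures;
    }
}
````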
Closes#6410
InternalEngine#refreshNeeded must increment the ref count on the
store used before checking if the searcher is current, since
internally a searcher ref is acquired, and if that happens concurrently
with an engine close it might violate the assumption that all files
are closed when the store is closed.
This commit also converts some try / finally blocks into try-with-resources.
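As an illustration of the ref-counting pattern (a generic sketch, not the actual Store/InternalEngine code): callers take a reference before touching the guarded files and release it in a finally block, so a concurrent close only releases the files once the count drops to zero:
````
import java.util.concurrent.atomic.AtomicInteger;

public class RefCountedStore {
    private final AtomicInteger refCount = new AtomicInteger(1); // 1 = the "open" reference

    /** returns false if the store is already closed; callers must not touch the files then */
    public boolean tryIncRef() {
        while (true) {
            int current = refCount.get();
            if (current <= 0) {
                return false;
            }
            if (refCount.compareAndSet(current, current + 1)) {
                return true;
            }
        }
    }

    public void decRef() {
        if (refCount.decrementAndGet() == 0) {
            closeInternal(); // last reference gone, now it is safe to release the files
        }
    }

    public void close() {
        decRef(); // drop the initial reference; files are released once in-flight readers finish
    }

    private void closeInternal() {
        // release the underlying files here
    }
}
````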
We currently look for REST tests on the file system even though they are disabled. We should not do that and should move the check earlier on. This way third parties using our test infra, which don't have REST tests on the file system, can effectively disable the REST tests; otherwise they would get an initialization error despite having disabled them. The downside is that the number of tests displayed is going to be zero instead of the real number of parsed REST tests, but there is nothing we can do about this. Tests get ignored anyway.
This change removes the backwards compatibility workaround that checks whether a compoundOrder originated from a single user-defined criterion, for the purposes of serialising to older-versioned nodes.
Today we have to catch rejected operation exceptions in various places
and notify an ActionListener. This pattern is error prone and adds a lot
of boilerplate code. It's also easy to miss catching this exception,
which is only relevant when nodes are under high load. This commit adds
infrastructure that makes ActionListener a first-class citizen on async
actions.
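A minimal sketch of the pattern (simplified, not the actual infrastructure added by this commit): the submission helper routes a threadpool rejection straight to the listener instead of every call site catching it:
````
import java.util.concurrent.ExecutorService;
import java.util.concurrent.RejectedExecutionException;

public class ListenerExecutor {
    public interface ActionListener<T> {
        void onResponse(T response);
        void onFailure(Throwable t);
    }

    public static <T> void execute(ExecutorService executor, ActionListener<T> listener, Runnable action) {
        try {
            executor.execute(action); // the action itself calls listener.onResponse when done
        } catch (RejectedExecutionException e) {
            // node is under high load and the queue is full: notify the listener
            // instead of letting the exception escape and be missed by the caller
            listener.onFailure(e);
        }
    }
}
````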
Closes#7765
We currently expose generic getters for indices and indicesOptions on the IndicesRequest interface. This commit adds a generic setter as well, which can be used to set the indices of a request. The setter implementation throws `UnsupportedOperationException` if called on internal requests. It also throws the exception if called on single-index operations, since the setter accepts an array as its argument.
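A simplified sketch of the setter semantics (not necessarily the exact interface shape): requests that cannot accept an arbitrary index array reject the call:
````
public interface IndicesRequest {
    String[] indices();

    // generic setter; internal requests and single-index requests are expected to reject it
    IndicesRequest indices(String... indices);

    // example of a single-index request rejecting the generic setter
    class SingleIndexRequest implements IndicesRequest {
        private final String index;

        public SingleIndexRequest(String index) { this.index = index; }

        @Override public String[] indices() { return new String[] { index }; }

        @Override public IndicesRequest indices(String... indices) {
            throw new UnsupportedOperationException("single index requests cannot take an index array");
        }
    }
}
````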
Closes#7734
Update request internally executes index and delete operations. We need to make sure that those internal operations hold the same headers and context as the original update request. Achieved via copy constructors that accept the current request and the original request.
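In sketch form (illustrative names, not the actual update/index/delete request classes), the copy constructor is what carries headers and context from the original request into the internal one:
````
import java.util.HashMap;
import java.util.Map;

public class ContextCarryingRequest {
    private final Map<String, String> headers;
    private final Map<String, Object> context;

    public ContextCarryingRequest() {
        this.headers = new HashMap<>();
        this.context = new HashMap<>();
    }

    /** copy constructor: an internal index/delete request built this way inherits
     *  the headers and context of the original update request */
    public ContextCarryingRequest(ContextCarryingRequest original) {
        this.headers = new HashMap<>(original.headers);
        this.context = new HashMap<>(original.context);
    }
}
````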
Closes#7766
Delete mapping executes flush, delete by query and refresh operations internally. Those internal requests are now initialized by passing in the original delete mapping request so that its headers and request context are kept around.
Closes#7736
Previously we reset the test cluster for all subsequent tests
even though they didn't fail. This change makes suites like the REST tests faster
and prevents crazy timeouts.
Closes#7775
Changes the name of the field in the scripted metrics aggregation from 'aggregation' to 'value' to be more in line with the other metrics aggregations like 'avg'
Master-node-related operations were missing proper handling of cluster blocks, allowing, for example, cluster-level settings updates to be performed even before the state was fully restored on initial cluster startup.
Note, the change allows read-only-related settings to be changed without checking for blocks on update settings, since without it one can't re-enable metadata/write operations. Also, it doesn't check for blocks on the cluster state and health APIs, as those are allowed to be used even when blocked, in order to figure out what causes the block.
closes #7763, closes #7740
upgradeOneNode() only checked if the new node is in the nodes info.
However, this does not guarantee that all nodes have joined the cluster
already. For example the new node could be the master and might not yet
know about all nodes and the other nodes might not know about the new
master yet.
Depending on which client is picked later, the client might then try to
send requests to the old node that was shut down instead of the new one.
Instead of just checking if the new node is in the nodes info, we should
therefore also check that all nodes are in the nodes info.