The recovery API was sometimes misreporting the recovered byte
percentages of index files. The cause was that the total file length was
added to the recovered-bytes count on every file chunk transfer; instead,
only the length of each transfer request should have been added.
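A minimal sketch of the accounting change, with hypothetical names
(RecoveryTracker and onChunkReceived are illustrative, not the actual
recovery code):

    public class RecoveryTracker {
        private long totalBytes;     // sum of the lengths of all files to recover
        private long recoveredBytes; // bytes received so far

        public void onChunkReceived(long fileLength, long chunkLength) {
            // Buggy version: recoveredBytes += fileLength;
            // That counts the whole file once per chunk, so the percentage overshoots.
            recoveredBytes += chunkLength; // count only this transfer request's bytes
        }

        public double recoveredPercent() {
            return totalBytes == 0 ? 100.0 : 100.0 * recoveredBytes / totalBytes;
        }
    }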
Closes #6113
NettyTransport's ChannelPipelineFactory uses the instance variable
serverOpenChannels in order to create sockets. However, this instance variable
is set to null when stopping the netty transport, so if the transport is
stopped while a socket is being initialized, you might hit the following
NullPointerException:
[2014-05-13 07:33:47,616][WARN ][netty.channel.socket.nio.AbstractNioSelector] Failed to initialize an accepted socket.
java.lang.NullPointerException: handler
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.<init>(DefaultChannelPipeline.java:725)
at org.jboss.netty.channel.DefaultChannelPipeline.init(DefaultChannelPipeline.java:667)
at org.jboss.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:96)
at org.elasticsearch.transport.netty.NettyTransport$2.getPipeline(NettyTransport.java:327)
at org.jboss.netty.channel.socket.nio.NioServerBoss.registerAcceptedChannel(NioServerBoss.java:134)
at org.jboss.netty.channel.socket.nio.NioServerBoss.process(NioServerBoss.java:104)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
This fix ensures that the ChannelPipelineFactory always uses the channels
that were set up on start, even if a stop request is issued concurrently.
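A minimal sketch of the idea, using hypothetical names: the pipeline factory
captures the open-channels handler at construction time instead of re-reading
an instance field that a concurrent stop() may null out.

    import org.jboss.netty.channel.ChannelHandler;
    import org.jboss.netty.channel.ChannelPipeline;
    import org.jboss.netty.channel.ChannelPipelineFactory;
    import org.jboss.netty.channel.Channels;

    class ServerPipelineFactory implements ChannelPipelineFactory {
        private final ChannelHandler openChannels;

        ServerPipelineFactory(ChannelHandler openChannels) {
            this.openChannels = openChannels; // captured once, at start time
        }

        @Override
        public ChannelPipeline getPipeline() {
            ChannelPipeline pipeline = Channels.pipeline();
            // Never null here, even if the transport is being stopped concurrently.
            pipeline.addLast("openChannels", openChannels);
            return pipeline;
        }
    }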
Close #6144
Our improvements to t-digest have been pushed upstream, and t-digest has also
gained some nice improvements around memory usage and the speed of quantile
estimation. So it makes sense to use it as a dependency now.
This also allows us to remove the test dependency on Apache Mahout.
Close#6142
It's dangerous to expose SerialMergeScheduler as an option: since it only allows one merge at a time, it can easily cause merging to fall behind.
Closes#6120
When a mapping is declared and the type is known from the URI,
then the type can be skipped in the body (see #4483). However,
there was no check that the given keys actually form a valid mapping.
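A hedged sketch of the kind of check that was missing; the class name and the
set of accepted root keys below are illustrative, not the actual parser code:

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    class MappingKeyCheck {
        // Illustrative subset of keys that are legal at the root of a mapping.
        private static final Set<String> VALID_ROOT_KEYS = new HashSet<>(Arrays.asList(
                "properties", "_source", "_all", "_routing", "dynamic_templates"));

        // When the type comes from the URI and is omitted in the body, the
        // remaining root-level keys must still be valid mapping keys.
        static void validate(Map<String, Object> body) {
            for (String key : body.keySet()) {
                if (!VALID_ROOT_KEYS.contains(key)) {
                    throw new IllegalArgumentException("unknown mapping key [" + key + "]");
                }
            }
        }
    }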
Closes #5864
Closes #6093
Percentiles are supposed to be monotonically increasing, but floating-point
rounding issues can come into play and make the test fail if the checks are
too strict.
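A minimal sketch of a tolerance-aware monotonicity check, assuming the
percentile estimates arrive as an array ordered by rank (names are
illustrative):

    class PercentileAssertions {
        // Allow a tiny relative error instead of strict ordering, since
        // floating-point rounding can make adjacent estimates cross slightly.
        static void assertMonotonic(double[] percentiles) {
            for (int i = 1; i < percentiles.length; i++) {
                double tolerance = Math.abs(percentiles[i]) * 1e-7;
                if (percentiles[i] + tolerance < percentiles[i - 1]) {
                    throw new AssertionError(
                        percentiles[i] + " is not >= " + percentiles[i - 1]);
                }
            }
        }
    }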
The validate API was failing to reject JSON input that had unsupported
fields placed after a supported field. This was causing invalid requests
to be reported as valid.
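A minimal sketch of the parsing pattern the fix moves towards; the class is
hypothetical and real code delegates the query field to the query parser, but
the point stands: every field must be consumed, and anything unrecognized must
fail, even when it appears after a supported field.

    import java.io.IOException;
    import org.elasticsearch.common.xcontent.XContentParser;

    class ValidateBodyParsing {
        static void parse(XContentParser parser) throws IOException {
            XContentParser.Token token;
            while ((token = parser.nextToken()) != null) {
                if (token == XContentParser.Token.FIELD_NAME) {
                    String field = parser.currentName();
                    if ("query".equals(field)) {
                        parser.nextToken();    // move to the field's value
                        parser.skipChildren(); // real code delegates to the query parser
                    } else {
                        throw new IllegalArgumentException(
                            "request does not support [" + field + "]");
                    }
                }
            }
        }
    }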
Fixes #5685
Made sure that a match_all query is used when no query is specified, and that
no NPE is thrown. Also used the same code path as the search API to ensure
that alias filters are taken into account, and likewise type filters.
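A minimal sketch of the defaulting step (the helper and its surroundings are
illustrative):

    import org.elasticsearch.index.query.QueryBuilder;
    import org.elasticsearch.index.query.QueryBuilders;

    class DefaultToMatchAll {
        // Fall back to match_all so downstream code never dereferences null.
        static QueryBuilder queryOrMatchAll(QueryBuilder query) {
            return query != null ? query : QueryBuilders.matchAllQuery();
        }
    }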
Closes #6111
Closes #6112
Closes #6116
The More Like This API would not take into account 'size' and 'from' in the
request body. Instead, these values were always overridden by the default
values of the REST parameters 'search_size' and 'search_from'.
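A sketch of the intended precedence; the helper and its wiring are
illustrative: body values win unless the REST parameter is explicitly set.

    import org.elasticsearch.rest.RestRequest;
    import org.elasticsearch.search.builder.SearchSourceBuilder;

    class MltSizeFromSketch {
        // Apply 'search_size'/'search_from' only when explicitly given, so
        // their defaults no longer clobber 'size' and 'from' from the body.
        static void applyParams(RestRequest rest, SearchSourceBuilder source) {
            if (rest.hasParam("search_size")) {
                source.size(rest.paramAsInt("search_size", -1));
            }
            if (rest.hasParam("search_from")) {
                source.from(rest.paramAsInt("search_from", -1));
            }
        }
    }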
Closes #5981
- Fix bug where repeatedly calling computeSummaryStatistics() could
accumulate some values incorrectly (see the sketch after this list)
- Fix check that the number of responsive nodes in the list is <= the number
of candidate benchmark nodes
- Add public getters for summary statistics
- Add javadoc for new getters
- Add javadoc comments about API use
- Improve abort and status tests by calling awaitBusy() to wait for jobs
to be completely submitted before testing them
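A minimal sketch of the accumulation fix mentioned above; the class and field
names are hypothetical: the aggregates are reset before recomputing, so
repeated calls do not double-count.

    import java.util.ArrayList;
    import java.util.List;

    class NodeSummaryStats {
        private final List<Long> latencies = new ArrayList<>();
        private long sum;
        private long count;

        void computeSummaryStatistics() {
            sum = 0;   // reset before recomputing, instead of accumulating
            count = 0; // across repeated calls
            for (long latency : latencies) {
                sum += latency;
                count++;
            }
        }

        // Public getters for the computed summary statistics.
        public long sum() { return sum; }
        public long count() { return count; }
    }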
In our REST tests we already have support for features and skip sections that allow us to skip tests if a feature is not supported.
We can then add a skip section based on the benchmark feature to the benchmark tests and execute them only when they are supported, knowing that they need at least one node with node.bench settings within the cluster. We can check that this requirement is met by calling the nodes info API.
This way we can dynamically decide whether or not to execute those tests, and we don't need to have a node.bench node around all the time. In fact, given that the REST tests use the GLOBAL cluster, we want to be able to randomize settings as much as possible and run tests against default settings as well. Also, this mechanism can easily be supported by the external cluster implementation that is used during the release process.
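A hedged sketch of that capability check, assuming a client handle; the class
name is hypothetical and the call shape follows the 2014-era Java nodes info
API:

    import org.elasticsearch.action.admin.cluster.node.info.NodeInfo;
    import org.elasticsearch.action.admin.cluster.node.info.NodesInfoResponse;
    import org.elasticsearch.client.Client;

    class BenchFeatureCheck {
        // The benchmark feature is usable only if at least one node in the
        // cluster was started with node.bench enabled.
        static boolean benchNodeAvailable(Client client) {
            NodesInfoResponse response = client.admin().cluster()
                    .prepareNodesInfo().setSettings(true).get();
            for (NodeInfo node : response.getNodes()) {
                if (node.getSettings().getAsBoolean("node.bench", false)) {
                    return true;
                }
            }
            return false;
        }
    }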
Introduced the ability to disable benchmark nodes, which is needed by BenchmarkNegativeTest.