Commit Graph

5519 Commits

Author SHA1 Message Date
Clinton Gormley 9f59a85c26 Removed .settings from .gitignore - @jpountz intended
it to be included to ensure consistent code styles
between Eclipse and IDEA. Instead, just ignore
the org.eclipse.m2e preferences
2013-09-13 09:29:06 +02:00
Igor Motov 714aaa40ea CompletionFieldMapper should use index name instead of full name
Fixes #3669
2013-09-12 17:39:03 -04:00
Simon Willnauer 507b6a6e8c Fix several places where resources are not closed
This commit adds a test that checks that we are closing
all files even if random IOExceptions happen. The test
found several places where streams were not closed when
IOExceptions occurred, due to insufficient exception handling.
2013-09-12 23:24:27 +02:00
Luca Cavanna 439413c626 Introduced common test methods in MatchedQueriesTests (e.g. createIndex, ensureGreen, refresh, assertHitCount) 2013-09-12 21:03:33 +02:00
Luca Cavanna 3a5b325b23 Added specific log level to check which shards were refreshed and which shards we searched on 2013-09-12 19:53:59 +02:00
Clinton Gormley d6ecdecc19 [DOCS] Deprecated the from/to/include_lower/include_upper params
in the range query, range filter and numeric range filter.
Better to use gt/gte/lt/lte as they are explicit.
2013-09-12 15:07:36 +02:00
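For reference, a minimal sketch of the preferred syntax, assuming a hypothetical index `my_index` with a numeric `age` field:

```sh
# Hypothetical index/field names; gt/gte/lt/lte replace from/to/include_lower/include_upper.
curl -XGET 'localhost:9200/my_index/_search' -d '{
  "query": {
    "range": {
      "age": { "gte": 10, "lt": 20 }
    }
  }
}'
```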
David Pilato 169cd007b5 Fix typo
Thanks to @ybonnel for finding it ;-)
2013-09-12 11:00:59 +02:00
Martijn van Groningen 8ddb809f98 If all scroll ids should be removed then the `_all` value should be used instead of not specifying any scroll ids. 2013-09-12 10:41:38 +02:00
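As a rough illustration (not part of the commit itself), clearing every open scroll would then look something like:

```sh
# Assumes the clear scroll API accepts the `_all` value in the URL path.
curl -XDELETE 'localhost:9200/_search/scroll/_all'
```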
Martijn van Groningen 1fe72dcf56 Moved to use assertHitCount instead of just checking the total count 2013-09-12 10:25:42 +02:00
Martijn van Groningen d3aacba6c6 Set log level to trace for package action.support.broadcast in the RecoveryPercolatorTests#testSinglePercolator_recovery test. 2013-09-12 10:19:10 +02:00
Simon Willnauer fddb7420ae Add support for Lucene's MockDirectoryWrapper
MockDirectoryWrapper adds asserting logic to the low level directory
implementation that helps to track and catch resource leaks like
unclosed index inputs caused by dangling IndexReader or IndexSearcher
instances. It prevents double writes to files and allows low level
random exceptions to be thrown for testing index consistency etc.

Closes #3654
2013-09-11 22:47:34 +02:00
Simon Willnauer 0e936c99e3 Add -Dtests.integration support to pom.xml 2013-09-11 22:16:43 +02:00
Simon Willnauer c2675bac50 Added 'IntegrationTests' annotation to filter integration tests
Tests now support ignoring integration tests by passing
'-Dtests.integration=false' to the test runner, either via the IDE or
Maven, to only run the fast unit tests.
2013-09-11 22:12:38 +02:00
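A hedged sketch of what this enables, assuming a standard Maven invocation from the project root:

```sh
# Run only the fast unit tests; integration tests are filtered out by the annotation.
mvn test -Dtests.integration=false
```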
Luca Cavanna ff54cba807 fixed typo in javadoc 2013-09-11 21:00:53 +02:00
Andrew Raines 6c9542d967 Fix misspellings. 2013-09-11 13:49:33 -05:00
Andrew Raines 7425b85b0b Collapse test.{unit,integration} into org.elasticsearch. 2013-09-11 12:49:49 -05:00
David Pilato b5c9807ba2 Reverting change in bin/plugin due to d3980ee184f11efcbd2b38421f4946de9198fe20 commit 2013-09-11 16:34:08 +02:00
Shay Banon bbce6e8588 Rare race condition when introducing new fields into a mapping
Dynamic mapping allows new fields to be dynamically introduced into an existing mapping. There is a (pretty rare) race condition where a new field/object being introduced will not be immediately visible to another document that introduces it at the same time.

closes #3667, closes #3544
2013-09-11 07:08:57 -07:00
Luca Cavanna 012797a82c Loggers#getLogger static method to take into account the logger prefix
Improved LoggingListener (which reads and applies the @TestLogging annotation) to take the logger prefix into account. We can now use the @TestLogging annotation and specify either the whole package name (e.g. o.e.action.metadata) or only the package name without the logger prefix (action.metadata); the custom log level will be properly applied in both cases
2013-09-11 16:04:08 +02:00
Luca Cavanna daedf853a0 Removed logger prefixes when using @TestLogging annotation 2013-09-11 11:24:07 +02:00
Martijn van Groningen 1d8457394d Added more trace logging.
Enabled trace logging for RecoveryPercolatorTests#testMultiPercolator_recovery test
Cleaned up and removed unnecessary usage of concurrent collections.
2013-09-11 11:20:08 +02:00
Luca Cavanna 0b79ba9493 Rewrote SuggestResponse#toString method
It only prints out JSON now (as the SearchResponse does)
Added missing startObject & endObject calls (they were causing a JsonGenerationException)

Closes #3661
2013-09-10 22:33:51 +02:00
Martijn van Groningen 0efa78710b Added clear scroll api.
The clear scroll API allows clearing all resources associated with a `scroll_id` by deleting the `scroll_id` and its associated SearchContext.

Closes #3657
2013-09-10 21:17:34 +02:00
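A minimal usage sketch, assuming the scroll id can be passed in the URL path and has been stored in a shell variable:

```sh
# Frees the SearchContext held for the given scroll id.
curl -XDELETE "localhost:9200/_search/scroll/$SCROLL_ID"
```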
David Pilato fafc4eef98 Plugin Manager: add silent mode.
Now that we have proper exit codes for the elasticsearch plugin manager (see #3463), we can add a silent mode to the plugin manager.

```sh
bin/plugin --install karmi/elasticsearch-paramedic --silent
```

Closes #3628.
2013-09-10 18:31:35 +02:00
Britta Weber a91653cf1a change logging level of testTimeoutSendExceptionWithDelayedResponse to TRACE 2013-09-10 17:19:26 +02:00
Luca Cavanna 4032f1eb44 Configured TRACE logging to RecoveryWhileUnderLoadTests#recoverWhileUnderLoadAllocateBackupsTest
We want to log which shards we are searching on and which shards we are refreshing
2013-09-10 14:28:59 +02:00
Britta Weber 7c20603071 use ElasticsearchAssertions to get info on shard failures 2013-09-10 12:13:37 +02:00
Simon Willnauer 5c00dc5773 Rename IndexShard#searcher() to #acquireSearcher()
Based on recent bugs ( #3652 ) where searchers were acquired multiple times
but never released, 'IndexShard#searcher()' now has a more accurate name.

Closes #3653
2013-09-09 21:21:27 +02:00
Simon Willnauer 777d7f47a5 Catch ESRejectedExecutionException on node close
When a node shuts down, thread pools might throw
ESRejectedExecutionException, and our test framework fails tests if
such exceptions are not caught and reach the uncaught exception handler.
2013-09-09 21:21:27 +02:00
David Pilato 764aa54f2d Plugin Manager should support -remove group/artifact/version naming
When installing a plugin, we use:

```sh
bin/plugin --install groupid/artifactid/version
```

But when removing the plugin, we only support:

```sh
bin/plugin --remove dirname
```

where `dirname` is the directory name of the plugin under `/plugins` dir.

Closes #3421.
2013-09-09 21:17:16 +02:00
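A hedged example of the removal form this change aims to support, reusing the paramedic plugin from the earlier commit as a hypothetical target:

```sh
# Remove by the same group/artifact naming used at install time,
# instead of the on-disk plugin directory name.
bin/plugin --remove karmi/elasticsearch-paramedic
```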
Brad Fritz f3c0e39380 key is "index.store.type", not "index.storage.type" 2013-09-09 13:06:09 -04:00
Simon Willnauer 76cc8c3549 Only pull searcher once during completion stats
The inner call to the completion stats pulled a second searcher
that was never released, causing the underlying readers to never
get closed unless the node was shut down. This was triggered
by literally every stats call that included the completion stats,
even if no completion service was used on the index.

Closes #3652
2013-09-09 17:50:41 +02:00
Lee Hinman 7d52d58747 Add AllocationDecider that takes free disk space into account
This commit adds two main pieces, the first is a ClusterInfoService
that provides a service running on the master nodes that fetches the
total/free bytes for each data node in the cluster as well as the
sizes of all shards in the cluster. This information is gathered by
default every 30 seconds, and can be changed dynamically by setting
the `cluster.info.update.interval` setting. This ClusterInfoService
can hopefully be used in the future to weight nodes for allocation
based on their disk usage, if desired.

The second main piece is the DiskThresholdDecider, which can disallow
a shard from being allocated to a node, or from remaining on the node
depending on configuration parameters. There are three main
configuration parameters for the DiskThresholdDecider:

`cluster.routing.allocation.disk.threshold_enabled` controls whether
the decider is enabled. It defaults to false (disabled). Note that the
decider is also disabled for clusters with only a single data node.

`cluster.routing.allocation.disk.watermark.low` controls the low
watermark for disk usage. It defaults to 0.70, meaning ES will not
allocate new shards to nodes once they have more than 70% disk
used. It can also be set to an absolute byte value (like 500mb) to
prevent ES from allocating shards if less than the configured amount
of space is available.

`cluster.routing.allocation.disk.watermark.high` controls the high
watermark. It defaults to 0.85, meaning ES will attempt to relocate
shards to another node if the node disk usage rises above 85%. It can
also be set to an absolute byte value (similar to the low watermark)
to relocate shards once less than the configured amount of space is
available on the node.

Closes #3480
2013-09-09 09:49:30 -06:00
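A minimal configuration sketch, assuming these settings are read from config/elasticsearch.yml; the values simply mirror the defaults described above:

```sh
cat >> config/elasticsearch.yml <<'EOF'
# Enable the decider (it defaults to false).
cluster.routing.allocation.disk.threshold_enabled: true
# Stop allocating new shards to a node above 70% disk usage.
cluster.routing.allocation.disk.watermark.low: 0.70
# Try to relocate shards away from a node above 85% disk usage.
cluster.routing.allocation.disk.watermark.high: 0.85
# How often the ClusterInfoService refreshes disk usage and shard sizes.
cluster.info.update.interval: 30s
EOF
```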
Luca Cavanna 563111f0f9 Fixed typo: renamed test wamer package to warmer 2013-09-09 14:24:21 +02:00
Luca Cavanna 8ad583b35e Fixed typo: renamed test.listerners package to test.listener 2013-09-09 14:24:21 +02:00
Luca Cavanna 5fdae8ba10 Restored log lines to TRACE for notifications after alias creation and index open/close
Introduced a specific log level for those in OpenCloseIndexTests#testCloseOpenAliasMultipleIndices
2013-09-09 14:24:21 +02:00
Luca Cavanna 9e72683ba4 Added @TestLogging annotation to set a specific level per test method
It supports multiple logger:level comma-separated key-value pairs.
Use the _root keyword to set the root logger level,
e.g. @TestLogging("_root:DEBUG,org.elasticsearch.cluster.metadata:TRACE")
or just @TestLogging("_root:DEBUG,cluster.metadata:TRACE") since we start the tests with -Des.logger.prefix=
2013-09-09 14:24:21 +02:00
Simon Willnauer 453e7c1510 Use same index name for indexing that is used for creating the index 2013-09-09 13:01:53 +02:00
Simon Willnauer 732e38b8c7 Throw IAE if reserved completion suggester chars are used in input
The completion suggester reserves 0x00 and 0xFF for internal use.
If those chars are used in the input string, an IllegalArgumentException (IAE) is thrown and the
input is rejected.

Closes #3648
2013-09-09 11:57:01 +02:00
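As a rough illustration (index, type and field names are hypothetical), an input containing the reserved 0x00 character would now be rejected at index time:

```sh
# Create a completion field, then try to index an input with an embedded \u0000.
curl -XPUT 'localhost:9200/songs' -d '{
  "mappings": { "song": { "properties": { "suggest": { "type": "completion" } } } }
}'
curl -XPUT 'localhost:9200/songs/song/1' -d '{
  "suggest": { "input": ["bad\u0000value"] }
}'   # expected to be rejected with an IAE after this change
```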
Simon Willnauer d7b3ed7e8b Only use one document to test empty shards 2013-09-09 10:53:53 +02:00
Simon Willnauer a5bf8824d2 Remove createMapped and mapping helpers from AbstractSharedClusterTest
Our tests should use the API we have rather than hard-to-read /
hard-to-understand test helpers.
2013-09-09 09:29:22 +02:00
Nik Everett da4c58d853 Beautify SuggestSearchTests.
SuggestSearchTests had tons of duplicate code and didn't use all of the
fancy new integration test helper methods.  I've removed a ton of duplicate
code and used as many of the nice test helper methods as I could.

Closes #3611
2013-09-09 09:16:47 +02:00
Shay Banon 3c5dd43928 fix logging level 2013-09-08 00:32:44 +02:00
Martijn van Groningen 86d69bb41c field rename 2013-09-06 21:35:53 +02:00
Martijn van Groningen b01dececac Removed matched_filters in favor of matched_queries.
Closes #3644
2013-09-06 21:34:08 +02:00
Martijn van Groningen 29ea6b3071 Deprecate the method names `namedFilter` in favor of `namedQueries`.
The reason behind this is that the `_name` support has been extended to queries as well, and the name `namedQueries` better suggests that it is applicable to the whole query DSL.

Relates to #3581
2013-09-06 21:15:22 +02:00
Shay Banon 2767c081cd Allow to control the number of processors sizes are based on
Sometimes, one wants to control the number of processors that our various size calculations for thread pools and network workers are based on, rather than using the number of available processors reported by the OS. This adds a `processors` setting so it can be controlled.

closes #3643
2013-09-06 19:04:15 +02:00
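A minimal sketch, assuming the new setting is read from config/elasticsearch.yml like other node-level settings:

```sh
# Base thread pool and network worker sizing on 4 processors
# instead of the value reported by the OS.
echo 'processors: 4' >> config/elasticsearch.yml
```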
Adrien Grand c9a7bb26ba Remember to think about filter caching when upgrading to Lucene 4.5. 2013-09-06 18:23:16 +02:00
Simon Willnauer 8203d4dbcf fix potential blocking of NettyTransport connect and disconnect methods
Currently, in NettyTransport the locks for connecting and disconnecting channels are stored in a ConcurrentMap that has 500 entries. A thread can acquire a lock for an id, and the lock returned is chosen by a hash function computed from the id.
Unfortunately, a collision of two ids can cause a deadlock as follows:

Scenario: one master (no data), one datanode (only data node)

DiscoveryNode id of master is X
DiscoveryNode id of datanode is Y

Both X and Y cause the same lock to be returned by NettyTransport#connectLock()

Both are up and running, all is fine until master stops.

Thread 1: The master fault detection of the datanode is notified (onNodeDisconnected()), which in turn leads the node to try and reconnect to master via the callstack titled "Thread 1" below.

-> connectToNode() is called and the lock for X is acquired. The method waits 45s for the channels to reconnect.

Furthermore, Thread 1 holds the NettyTransport#masterNodeMutex.

Thread 2: The connection fails with an exception (connection refused, see callstack below), because the master shut down already. The exception is handled in NettyTransport#exceptionCaught which calls NettyTransport#disconnectFromNodeChannel. This method acquires the lock for Y (see Thread 2 below).

Now, if Y and X had two different locks, thread 2 would get the lock, disconnect the channels and notify thread 1. But since X and Y share the same lock, thread 2 is deadlocked with thread 1, which waits for 45s.

In this time, no thread can acquire the masterNodeMutex (held by thread 1), so the node can, for example, not stop.

This commit introduces a mechanism that assures unique locks for unique ids. This lock is not reentrant and therefore assures that threads cannot end up in an infinite recursion (see Thread 3 below for an example of how a thread can acquire a lock twice).
While this is not a problem right now, it is potentially dangerous to have it that way, because the callstacks are complex as is and slight changes might cause unexpected recursions.

Thread 1
----

	owns: Object  (id=114)
	owns: Object  (id=118)
	waiting for: DefaultChannelFuture  (id=140)
	Object.wait(long) line: not available [native method]
	DefaultChannelFuture(Object).wait(long, int) line: 461
	DefaultChannelFuture.await0(long, boolean) line: 311
	DefaultChannelFuture.awaitUninterruptibly(long) line: 285
	NettyTransport.connectToChannels(NettyTransport$NodeChannels, DiscoveryNode) line: 672
	NettyTransport.connectToNode(DiscoveryNode, boolean) line: 609
	NettyTransport.connectToNode(DiscoveryNode) line: 579
	TransportService.connectToNode(DiscoveryNode) line: 129
	MasterFaultDetection.handleTransportDisconnect(DiscoveryNode) line: 195
	MasterFaultDetection.access$0(MasterFaultDetection, DiscoveryNode) line: 188
	MasterFaultDetection$FDConnectionListener.onNodeDisconnected(DiscoveryNode) line: 245
	TransportService$Adapter$2.run() line: 298
	EsThreadPoolExecutor(ThreadPoolExecutor).runWorker(ThreadPoolExecutor$Worker) line: 1145
	ThreadPoolExecutor$Worker.run() line: 615
	Thread.run() line: 724

Thread 2
-------

	waiting for: Object  (id=114)
	NettyTransport.disconnectFromNodeChannel(Channel, Throwable) line: 790
	NettyTransport.exceptionCaught(ChannelHandlerContext, ExceptionEvent) line: 495
	MessageChannelHandler.exceptionCaught(ChannelHandlerContext, ExceptionEvent) line: 228
	MessageChannelHandler(SimpleChannelUpstreamHandler).handleUpstream(ChannelHandlerContext, ChannelEvent) line: 112
	DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline$DefaultChannelHandlerContext, ChannelEvent) line: 564
	DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(ChannelEvent) line: 791
	SizeHeaderFrameDecoder(FrameDecoder).exceptionCaught(ChannelHandlerContext, ExceptionEvent) line: 377
	SizeHeaderFrameDecoder(SimpleChannelUpstreamHandler).handleUpstream(ChannelHandlerContext, ChannelEvent) line: 112
	DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline$DefaultChannelHandlerContext, ChannelEvent) line: 564
	DefaultChannelPipeline.sendUpstream(ChannelEvent) line: 559
	Channels.fireExceptionCaught(Channel, Throwable) line: 525
	NioClientBoss.processSelectedKeys(Set<SelectionKey>) line: 110
	NioClientBoss.process(Selector) line: 79
	NioClientBoss(AbstractNioSelector).run() line: 312
	NioClientBoss.run() line: 42
	ThreadRenamingRunnable.run() line: 108
	DeadLockProofWorker$1.run() line: 42
	ThreadPoolExecutor.runWorker(ThreadPoolExecutor$Worker) line: 1145
	ThreadPoolExecutor$Worker.run() line: 615
	Thread.run() line: 724

Thread 3
---------

        org.elasticsearch.transport.netty.NettyTransport.disconnectFromNode(NettyTransport.java:772)
        org.elasticsearch.transport.netty.NettyTransport.access$1200(NettyTransport.java:92)
        org.elasticsearch.transport.netty.NettyTransport$ChannelCloseListener.operationComplete(NettyTransport.java:830)
        org.jboss.netty.channel.DefaultChannelFuture.notifyListener(DefaultChannelFuture.java:427)
        org.jboss.netty.channel.DefaultChannelFuture.notifyListeners(DefaultChannelFuture.java:418)
        org.jboss.netty.channel.DefaultChannelFuture.setSuccess(DefaultChannelFuture.java:362)
        org.jboss.netty.channel.AbstractChannel$ChannelCloseFuture.setClosed(AbstractChannel.java:355)
        org.jboss.netty.channel.AbstractChannel.setClosed(AbstractChannel.java:185)
        org.jboss.netty.channel.socket.nio.AbstractNioChannel.setClosed(AbstractNioChannel.java:197)
        org.jboss.netty.channel.socket.nio.NioSocketChannel.setClosed(NioSocketChannel.java:84)
        org.jboss.netty.channel.socket.nio.AbstractNioWorker.close(AbstractNioWorker.java:357)
        org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink.eventSunk(NioClientSocketPipelineSink.java:58)
        org.jboss.netty.channel.Channels.close(Channels.java:812)
        org.jboss.netty.channel.AbstractChannel.close(AbstractChannel.java:197)
        org.elasticsearch.transport.netty.NettyTransport$NodeChannels.closeChannelsAndWait(NettyTransport.java:892)
        org.elasticsearch.transport.netty.NettyTransport$NodeChannels.close(NettyTransport.java:879)
        org.elasticsearch.transport.netty.NettyTransport.disconnectFromNode(NettyTransport.java:778)
        org.elasticsearch.transport.netty.NettyTransport.access$1200(NettyTransport.java:92)
        org.elasticsearch.transport.netty.NettyTransport$ChannelCloseListener.operationComplete(NettyTransport.java:830)
        org.jboss.netty.channel.DefaultChannelFuture.notifyListener(DefaultChannelFuture.java:427)
        org.jboss.netty.channel.DefaultChannelFuture.notifyListeners(DefaultChannelFuture.java:418)
        org.jboss.netty.channel.DefaultChannelFuture.setSuccess(DefaultChannelFuture.java:362)
        org.jboss.netty.channel.AbstractChannel$ChannelCloseFuture.setClosed(AbstractChannel.java:355)
        org.jboss.netty.channel.AbstractChannel.setClosed(AbstractChannel.java:185)
        org.jboss.netty.channel.socket.nio.AbstractNioChannel.setClosed(AbstractNioChannel.java:197)
        org.jboss.netty.channel.socket.nio.NioSocketChannel.setClosed(NioSocketChannel.java:84)
        org.jboss.netty.channel.socket.nio.AbstractNioWorker.close(AbstractNioWorker.java:357)
        org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:93)
        org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109)
        org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
        org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90)
        org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        java.lang.Thread.run(Thread.java:724)
2013-09-06 17:56:18 +02:00
Shay Banon 4155741f7f BytesStreamOutput default size should be 2k instead of 32k
We changed the default of BytesStreamOutput (used in various places in ES) from 1k to 32k with the assumption that most streams tend to be large. This doesn't hold, for example, when indexing small documents and adding them using XContentBuilder (which will have a large overhead).

Default the buffer size to 2k now, but be relatively aggressive in expanding the buffer when below 256k (double it), and just use oversize (1/8th) when larger to try and minimize garbage and buffer copies.

relates to #3624
closes #3638
2013-09-06 02:25:17 +02:00