Commit Graph

7907 Commits

Author SHA1 Message Date
Simon Willnauer 26adb37f09 [TEST] Ignore bogus system properties.
LuceneTestCase might reset some solr properties, which causes
our tests to fail if they run earlier in the same JVM. We simply ignore
the solr properties.
2014-04-16 15:19:17 +02:00
Simon Willnauer 3530c8be7e [TEST] catch exceptions if TTL already expired when indexing
TTLPercolatorTests indexes docs with small TTLs, which can trigger
an AlreadyExpiredException. This is expected, although rare, and
we should simply catch it.
2014-04-16 15:10:28 +02:00
Simon Willnauer be14968c44 Ensure close is called under lock in the case of an engine failure
Until today we closed the engine without acquiring the write lock,
since most callers were still holding a read lock. This commit removes
the code that holds on to the read lock when failing the engine, which
means we can simply call #close().
2014-04-16 14:50:40 +02:00
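For context, a hedged sketch of the locking pattern described above (illustrative class and method names, not the actual Engine code): failing the engine acquires the write lock itself and closes under it, rather than relying on callers that still hold a read lock.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical engine-like resource; names are illustrative, not Elasticsearch's Engine API.
public class SketchEngine implements AutoCloseable {

    private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
    private volatile boolean closed;

    /** Fail the engine: acquire the write lock ourselves and close under it. */
    public void failEngine(Throwable reason) {
        // Callers must have released their read lock; ReentrantReadWriteLock cannot upgrade read -> write.
        rwl.writeLock().lock();
        try {
            close(); // safe: we own the write lock
        } finally {
            rwl.writeLock().unlock();
        }
    }

    @Override
    public void close() {
        if (!closed) {
            closed = true;
            // release readers, translog, etc. here
        }
    }
}
```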
Boaz Leskes 099b9c6b06 Add debug logs if failed shards cannot be resolved. 2014-04-16 14:45:54 +02:00
Martijn van Groningen 840d1b4b8e [TEST] Reduce the amount of docs being indexed. 2014-04-16 15:49:24 +07:00
Martijn van Groningen 98deb5537f Better deal with invalid scroll ids.
Closes #5738
2014-04-16 14:13:29 +07:00
Simon Willnauer 8df5d4c37e [TEST] Fix PercolatorTests#testSimple2
This test requires a mapping: without one, the percolator query might
not be parsed as a query on a numeric field, since the query might arrive
on a node before the dynamic mapping has reached that node.

This commit also moves the `indexService.readAllowed()` call up before
the number of percolation queries is checked, to make sure we fail if reads
are not allowed - there might be a query in-flight, which means we need
to check another node rather than return an empty result.
2014-04-15 23:01:35 +02:00
Lee Hinman 65e72a5be5 [TEST] Wait for green, and refresh after indexing in percolator test 2014-04-15 11:19:41 -06:00
Kouhei Sutou de59cde926 Remove garbage 2014-04-15 17:57:25 +02:00
Simon Willnauer c5c87c4a48 [TEST] Don't delete data dirs after test - only delete their content.
Closes #5815
2014-04-15 17:03:31 +02:00
Simon Willnauer 9898eed30c [DOCS] Update merge docs to reflect the max_merge_at_once property 2014-04-15 16:42:23 +02:00
Simon Willnauer 320a206352 Switch back to ConcurrentMergeScheduler
Load tests showed that SerialMS has problems keeping up with
the merges under high load. We should switch back to CMS
until we have a better story for balancing merge
threads / effort across shards on a single node.

Closes #5817
2014-04-15 16:42:23 +02:00
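For reference, a minimal plain-Lucene 4.7 sketch of wiring a ConcurrentMergeScheduler into an IndexWriterConfig (the Elasticsearch plumbing differs; the thread counts below are illustrative):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.ConcurrentMergeScheduler;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.util.Version;

public class MergeSchedulerSketch {

    public static IndexWriterConfig newConfig() {
        IndexWriterConfig config =
                new IndexWriterConfig(Version.LUCENE_47, new StandardAnalyzer(Version.LUCENE_47));
        ConcurrentMergeScheduler cms = new ConcurrentMergeScheduler();
        // Merges run on background threads instead of blocking the indexing thread.
        cms.setMaxMergesAndThreads(4, 1); // maxMergeCount, maxThreadCount (illustrative values)
        config.setMergeScheduler(cms);
        return config;
    }
}
```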
Adrien Grand 9920084ba2 [TEST] Wait for shards to be allocated before running testUpdateMappingDynamicallyWhilePercolating.
If the percolate request is executed too early, all shards fail and the
mapping is not actually updated.
2014-04-15 16:20:16 +02:00
Scott Wilkerson 9ea0e3a95b Update percolate.asciidoc
fix typo
2014-04-15 16:01:44 +02:00
eliasah c61110c28d Update core-types.asciidoc
Missing bracket
2014-04-15 15:57:04 +02:00
Yousef d7fda621e9 Updated date_formats to new dynamic_date_formats 2014-04-15 15:44:08 +02:00
Martijn van Groningen 202b1e2306 Update clusterstate if mapping service has local changes
If, during percolation, a new field was introduced
in the local mapping service, then those changes should
be propagated to the cluster state on the master as well.

Closes #5776
2014-04-15 13:41:01 +02:00
Simon Willnauer 7c6d745523 Cleanup FileSystemUtils#mkdirs(File)
This method had some workarounds for bugs that seem to be fixed
in Java 7 [1]. There seem to be other problems on shared file systems,
which are not really supported by Lucene anyway, or rather not
recommended. Yet the current solution, which interrupts a static thread
reference, is too dangerous given all the usage of NIO across
Elasticsearch.

[1] http://bugs.java.com/bugdatabase/view_bug.do?bug_id=4742723
2014-04-15 13:22:51 +02:00
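For comparison, the Java 7 NIO call that makes most such mkdirs workarounds unnecessary is `Files.createDirectories`, which creates missing parents and does not fail if the directory already exists (a minimal sketch; the path is illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class MkdirsSketch {

    public static void main(String[] args) throws IOException {
        Path dir = Paths.get("data", "nodes", "0"); // illustrative path
        // Creates all missing parent directories; throws IOException on real failures
        // instead of returning a boolean like File#mkdirs().
        Files.createDirectories(dir);
        System.out.println("created or already present: " + dir.toAbsolutePath());
    }
}
```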
Simon Willnauer 8dd5dd409e Remove FileSystemUtils#maxOpenFiles
This method basically forcefully creates as many files as possible
to find out the process limit in a brute-force manner. The number of
possible problems with this approach would exceed the number of lines
left in this commit message.

This commit uses a JMX based alternative to print the process limit.
2014-04-15 13:22:51 +02:00
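The JMX-based alternative looks roughly like the sketch below; the cast only succeeds on JVMs that expose `com.sun.management.UnixOperatingSystemMXBean` (i.e. Unix-like platforms on Oracle/OpenJDK):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

public class MaxOpenFilesSketch {

    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof com.sun.management.UnixOperatingSystemMXBean) {
            com.sun.management.UnixOperatingSystemMXBean unixOs =
                    (com.sun.management.UnixOperatingSystemMXBean) os;
            // Read the per-process file descriptor limit via JMX instead of brute-force file creation.
            System.out.println("max file descriptors:  " + unixOs.getMaxFileDescriptorCount());
            System.out.println("open file descriptors: " + unixOs.getOpenFileDescriptorCount());
        } else {
            System.out.println("file descriptor counts are not exposed on this platform");
        }
    }
}
```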
Shay Banon bc5bdbc5de Remove jsr166y now that we are on Java 7, and clean up jsr166e down to the classes we use 2014-04-15 13:17:28 +02:00
Simon Willnauer 8bede7024f Use TransportBulkAction for internal request from IndicesTTLService
This prevents executing the bulk API's internal auto-create-indices logic
and ensures that this internal request never creates an index
automatically.

This fixes a bug where the TTL purger thread ran after the actual
index it was purging had already been closed / deleted, and
re-created that index.

Closes #5766
2014-04-15 12:40:25 +02:00
Andrew Selden 7ef36d9d52 Separate benchmark API endpoints
Separates benchmark API endpoints into separate files according to API
functionality. This makes it easier for our tests and clients.

Closes #5787
2014-04-14 17:36:43 -07:00
Igor Motov 3d23a71fa7 Fix snapshot status with empty repository
The snapshot status command with an empty repository should return the current status of currently running snapshots in all repositories.

Fixes #5790
2014-04-14 19:02:41 -04:00
Andrew Selden 2cf66c4115 Benchmark documentation
Moving benchmark documentation under the search section.

Closes #5786
2014-04-14 14:08:41 -07:00
Igor Motov 2ed8c632be Separate persistent and global metadata serialization settings 2014-04-14 16:25:33 -04:00
Simon Willnauer 27d4d76769 [BUILD] remove leftover print statement 2014-04-14 22:23:10 +02:00
Simon Willnauer 0564c883be Remove unused FileSystemUtils#copyFile 2014-04-14 21:48:27 +02:00
Simon Willnauer a215dd3ae8 Prevent fsync from creating 0-byte files
This is related to LUCENE-5570, where fsync creates a 0-byte file
if the file does not exist. This commit adds the patched Lucene
version using Java 7 APIs, as well as a note to replace this method
with the upcoming IOUtils#fsync in Lucene 4.8.

This commit cleans up FsImmutableBlobContainer#writeBlob to make
use of Java 7 try-with-resources and ensures that the directory
the blob was written to is fsynced as well, if possible.
2014-04-14 21:48:23 +02:00
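A hedged sketch of the Java 7 style fsync in question: the file (or directory) is opened without the CREATE option, so a missing path throws instead of materializing a 0-byte file. This is illustrative only; Lucene 4.8's IOUtils#fsync is the intended replacement.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class FsyncSketch {

    /** Fsync an existing file or directory; never creates the path if it is missing. */
    public static void fsync(Path path, boolean isDir) throws IOException {
        // Directories can only be opened for reading; regular files need WRITE for force().
        StandardOpenOption option = isDir ? StandardOpenOption.READ : StandardOpenOption.WRITE;
        // No CREATE option here: a missing path raises NoSuchFileException instead of
        // silently creating a 0-byte file (the LUCENE-5570 pitfall).
        try (FileChannel channel = FileChannel.open(path, option)) {
            channel.force(true); // flush data and metadata to stable storage
        } catch (IOException e) {
            if (isDir) {
                return; // fsyncing a directory is not supported on some platforms (e.g. Windows)
            }
            throw e;
        }
    }
}
```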
Simon Willnauer 11bf13c363 Check for no open issues before building a release 2014-04-14 18:54:04 +02:00
Simon Willnauer 754eb16835 Upgrade to Lucene 4.7.2 2014-04-14 18:30:03 +02:00
Adrien Grand e458d4fd93 Improved SearchContext.addReleasable.
For resources that have their life time effectively defined by the search
context they are attached to, it is convenient to use the search context to
schedule the release of such resources.

This commit changes aggregations to use this mechanism and also introduces
a `Lifetime` object that can be used to define how long the object should
live:
 - COLLECTION: if the object only needs to live during collection time and is
   what SearchContext.addReleasable would have chosen before this change
   (used for p/c queries),
 - SEARCH_PHASE for resources that only need to live during the current search
   phase (DFS, QUERY or FETCH),
 - SEARCH_CONTEXT for resources that need to live until the context is
   destroyed.

Aggregators are currently registered with SEARCH_CONTEXT. The reason is that when
using the DFS_QUERY_THEN_FETCH search type, they are allocated during the DFS
phase but only used during the QUERY phase. However, we should fix this in order
to only allocate them during the QUERY phase and use SEARCH_PHASE as a
lifetime.

Close #5703
2014-04-14 17:42:41 +02:00
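Conceptually, the registration works roughly like the sketch below (paraphrased idea only; class and method names are not the actual Elasticsearch signatures):

```java
import java.util.ArrayList;
import java.util.EnumMap;
import java.util.List;
import java.util.Map;

// Sketch of scheduling resource release against a search-context-like scope.
public class ReleasablesSketch {

    /** How long a registered resource should live. */
    public enum Lifetime {
        COLLECTION,     // released once document collection is done (e.g. p/c queries)
        SEARCH_PHASE,   // released at the end of the current phase (DFS, QUERY or FETCH)
        SEARCH_CONTEXT  // released only when the whole context is destroyed
    }

    private final Map<Lifetime, List<AutoCloseable>> releasables = new EnumMap<>(Lifetime.class);

    /** Register a resource to be released when the given scope ends. */
    public void addReleasable(AutoCloseable releasable, Lifetime lifetime) {
        List<AutoCloseable> list = releasables.get(lifetime);
        if (list == null) {
            list = new ArrayList<>();
            releasables.put(lifetime, list);
        }
        list.add(releasable);
    }

    /** Called when the given scope ends: release and forget everything registered for it. */
    public void clearReleasables(Lifetime lifetime) throws Exception {
        List<AutoCloseable> list = releasables.remove(lifetime);
        if (list != null) {
            for (AutoCloseable releasable : list) {
                releasable.close();
            }
        }
    }
}
```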
Adrien Grand e589301806 Make Releasable extend AutoCloseable.
Java 7's AutoCloseable makes it possible to manage resources more nicely using
try-with-resources statements. Since the semantics of our Releasable interface
are very close to Closeable's, let's switch to it.

Close #5689
2014-04-14 17:21:42 +02:00
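The practical payoff is that a Releasable can now sit in a try-with-resources statement; a minimal sketch with a hypothetical Releasable-style interface (not the exact Elasticsearch declaration):

```java
// Hypothetical Releasable-style interface; extending AutoCloseable enables try-with-resources.
interface Releasable extends AutoCloseable {
    @Override
    void close(); // narrowed so callers need not handle a checked exception
}

public class ReleasableSketch {

    static Releasable acquirePage() {
        return new Releasable() {
            @Override
            public void close() {
                System.out.println("page released");
            }
        };
    }

    public static void main(String[] args) {
        // close() runs automatically at the end of the block, even if the body throws.
        try (Releasable page = acquirePage()) {
            System.out.println("using " + page);
        }
    }
}
```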
Adrien Grand e688f445ad [TEST] Use indexRandom in ShardSizeTests. 2014-04-14 12:31:34 +02:00
Simon Willnauer 1ce56ff969 Revert "Don't lookup version for auto generated id and create"
This reverts commit dc73498454.
2014-04-14 12:15:02 +02:00
Shay Banon dc73498454 Don't lookup version for auto generated id and create
When a create document request is executed with an auto-generated id (based on a UUID), we know that the document will not exist in the index, so there is no need to try to look up the version from the index.
For many cases, like logging, where ids are auto-generated, this can improve indexing performance, specifically for lightweight documents where analysis is not a big part of the execution.
2014-04-14 10:06:53 +02:00
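For context (the commit directly above reverts this change), an auto-generated-id create against the 1.x Java client looks roughly like the sketch below; the index and type names are illustrative:

```java
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.client.Client;

public class AutoIdCreateSketch {

    /**
     * Indexes a document without specifying an id: Elasticsearch generates a UUID-based id,
     * which is what made the version lookup unnecessary in this (later reverted) optimization.
     */
    public static String indexLogEvent(Client client, String json) {
        IndexResponse response = client.prepareIndex("logs", "event") // illustrative index/type
                .setCreate(true)   // explicit create; with an auto-generated id it cannot conflict
                .setSource(json)
                .execute()
                .actionGet();
        return response.getId();   // the auto-generated id
    }
}
```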
Peter Dyson f8537183b9 [DOCS] update old status of plugins 2014-04-13 20:18:19 -04:00
Simon Willnauer ad143e16cf [TEST] Fix ClusterStatsTests#testValuesSmokeScreen to wait for yellow to get reliable FS stats. 2014-04-12 23:02:31 +02:00
Simon Willnauer ec3c635696 [TEST] use a real upper bound for the check on the time spent during suggestions 2014-04-12 21:54:46 +02:00
Shay Banon e9c0dd9ae4 [Test] should be abstract 2014-04-12 16:14:58 +02:00
Simon Willnauer efb749936b [TEST] Improve performance of MockBigArray MockPageRecycler 2014-04-11 23:02:59 +02:00
Simon Willnauer 5d611a9098 Ensure pending merges are updated on segment flushes
Due to `async_merge` defaulting to `true`, we never ran
the merge policy on a segment flush, which prevented the
pending merges from being updated and caused actual
pending merges not to contribute to the merge decision.

This commit also removes the `index.async.merge` setting, which is actually
misleading since we take care of merges not being executed on the
indexing threads at a different level (the merge scheduler) since 1.1.

This commit also adds an additional check on when to run a refresh,
since solely relying on the dirty flag might leave merges un-refreshed,
which can cause search slowdowns and higher memory consumption.

Closes #5779
2014-04-11 23:02:59 +02:00
Boaz Leskes e0fbd5df52 PR #5706 introduced a bug in the sparse array-backed field data
When we load sparse single-valued data, we automatically assign a missing value to represent a document that has none. We try to find a value that will not increase the number of bits needed to represent the data. If that missing value happens to be 0, we do not properly initialize the value array.

This commit solves this problem and also cleans up the code even more, to make spotting such issues easier in the future.
2014-04-11 21:34:36 +02:00
Boaz Leskes 63d1fa45ab Added awaitFix for SimpleNestedTests.testSortNestedWithNestedFilter
Muted while failures are being investigated.
2014-04-11 18:12:35 +02:00
Malte Schirnacher 8ce3bba010 Fix typos in percolate.asciidoc
Close #5762 #5763 #5764
2014-04-11 18:09:16 +02:00
Boaz Leskes f549472fea Fixed: PackedArrayIndexFieldData.chooseStorageFormat compared to Long.MAX_VALUE instead of Long.MIN_VALUE
Also made the LongFieldDataTests.SINGLE_VALUED_SPARSE_RANDOM & LongFieldDataTests.MULTI_VALUED_SPARSE_RANDOM more sparse
2014-04-11 16:40:47 +02:00
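The underlying pattern behind the fix: a running minimum must start at Long.MAX_VALUE and a running maximum at Long.MIN_VALUE, otherwise the scan silently yields wrong bounds. A generic sketch (not the field data code itself):

```java
public class MinMaxSketch {

    /** Returns {min, max} of the given non-empty array. */
    static long[] minMax(long[] values) {
        long min = Long.MAX_VALUE; // every real value compares as smaller, so the first value wins
        long max = Long.MIN_VALUE; // every real value compares as larger, so the first value wins
        for (long v : values) {
            min = Math.min(min, v);
            max = Math.max(max, v);
        }
        return new long[] { min, max };
    }

    public static void main(String[] args) {
        long[] bounds = minMax(new long[] { 3, -7, 42 });
        System.out.println("min=" + bounds[0] + " max=" + bounds[1]); // min=-7 max=42
    }
}
```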
Boaz Leskes 1d1ca3befc Added an AppendingDeltaPackedLongBuffer-based storage format to single-valued field data
The AppendingDeltaPackedLongBuffer uses delta compression in a paged fashion. For data that is roughly monotonic this results in a reduced memory footprint.

By default we use the storage format expected to use the least memory. You can force a choice using a new field data setting, `memory_storage_hint`, which can be set to `ORDINALS`, `PACKED` or `PAGED`.

Closes #5706
2014-04-11 15:50:34 +02:00
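The memory win of a paged, delta-packed format comes from storing each value as a small offset from a per-page base instead of a full 64-bit long. A self-contained sketch of the idea (not the Lucene AppendingDeltaPackedLongBuffer implementation; it assumes the deltas fit in an int):

```java
public class DeltaPackedSketch {

    public static void main(String[] args) {
        // Roughly monotonic values, e.g. timestamps in milliseconds.
        long[] values = { 1397000000000L, 1397000000250L, 1397000001100L, 1397000002000L };

        long base = values[0];
        int[] deltas = new int[values.length]; // the deltas need far fewer bits than the raw longs
        for (int i = 0; i < values.length; i++) {
            deltas[i] = (int) (values[i] - base);
        }

        // Reading a value back is just base + delta.
        long restored = base + deltas[2];
        System.out.println(restored == values[2]); // true
    }
}
```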
Chris Earle e8ea9d7585 Strengthening pseudo random number generator and adding tests to verify its behavior.
Closes #5454 and #5578
2014-04-11 14:01:40 +02:00
Martijn van Groningen 45a1b44759 Each search request should use a new InternalSearchResponse instance, even in the case when all shards return no hits.
The InternalSearchResponse may get modified afterwards, so a new instance is required at all times.
2014-04-11 17:44:21 +07:00
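The pitfall being fixed is the usual shared-mutable-singleton problem: handing every request the same "empty" response object means one request's later mutation leaks into another. A generic illustration (not the InternalSearchResponse code):

```java
import java.util.ArrayList;
import java.util.List;

public class SharedEmptyResponseSketch {

    static class Response {
        final List<String> hits = new ArrayList<>();
    }

    // Broken pattern: a single shared "empty" instance handed out to every request.
    static final Response SHARED_EMPTY = new Response();

    static Response emptyResponseShared() {
        return SHARED_EMPTY;     // later mutation is visible to unrelated requests
    }

    static Response emptyResponseFresh() {
        return new Response();   // what the commit does: a new instance per request
    }

    public static void main(String[] args) {
        Response a = emptyResponseShared();
        a.hits.add("hit-from-request-a"); // request A's post-processing mutates the response
        System.out.println(emptyResponseShared().hits); // [hit-from-request-a] -> leaked into request B

        System.out.println(emptyResponseFresh().hits);  // [] -> isolated
    }
}
```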
Simon Willnauer 862611b792 [TEST] Prevent TTLPurger from recreating deleted index
Related to #5766
2014-04-11 09:03:28 +02:00
Martijn van Groningen 01794bf8ea Ignored clear scroll REST test 2014-04-11 13:47:41 +07:00