The blacklist can be provided through `-Dtests.rest.blacklist` and supports a comma-separated list of globs
e.g. -Dtests.rest.blacklist=get/10_basic/*,index/*/*
Also added some missing docs and made it clearer that the suite/test descriptions effectively contain their (relative) path (api/yaml_file/test section)
Closes#5881
This is a fix for a bug whereby, if no nodes in the cluster were started with
-Des.node.bench=true, clients would hang when attempting to
submit a benchmark.
Also adds REST tests to validate the fix
Closes#5754
we use the "local host" address in sevearl places in our networking layer, if local host is not resolved for some reason, still continue and operate but using the loopback interface
Today we first get a reference to the IndexSearcher in #acquireSearcher
and then further down we try to run Store#incRef() which might throw an
exception if the store is already closed. There is a small window
that allows this to happen during InternalEngine#close() when we try
to acquire the searcher at the same time and the engine is the last
resource that holds a reference to the store.
This commit only affects unreleased code since the Store's ref counting
has not yet been released.
The test starts a single node, indexes into it, restarts the node and checks that no data was lost. It only indexed into 2 shards and didn't wait for green, meaning that the node could be restarted with a non-started primary. In that case the node will not re-assign the primary as it was not started. This commit makes sure that we either wait for primaries to start or index into all shards, which has the same net effect.
Also extending some logging in InternalIndexShard.
The test verifies that stats are measured by checking timeInMillis > 0. On fast machines the suggestions complete in under 1 millisecond. The test now indexes documents (to power the suggestions) and runs multiple suggestions per iteration to slow things down.
When a replication operation (index/delete/update) fails to be executed properly, we fail the replica and allow the master to allocate a new copy of it. At the moment, the node hosting the primary shard is responsible for notifying the master of a failed replica. However, if the replica shard is initializing (`POST_RECOVERY` state), we have a race condition between the failed shard message and moving the shard into the `STARTED` state. If the latter happens first, the master will fail to resolve the failed shard message.
This commit builds on #5800 and fails the engine of the replica shard if a replication operation fails. This protects us against the above, as the shard will reject the `STARTED` command from the master. It also makes us more resilient to other race conditions in this area.
Closes#5847
A merge (and refresh) might rarely happen in the background between the two queries whose output is compared. It might then happen that two docs with the same score get returned by the two queries in a different order due to different Lucene document ids (which have changed in the meantime). To fix this we need to order by id when the score is the same, so that we can safely compare the output of the two queries (multi_match and dis_max).
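A minimal sketch of such a tie-break with the Java search API (assuming an existing `client`; index, query and field names are illustrative, not from the actual test):

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.sort.SortBuilders;
import org.elasticsearch.search.sort.SortOrder;

// Sort by score first, then break ties deterministically by the id field so
// the outputs of the two equivalent queries can be compared safely.
SearchResponse response = client.prepareSearch("test")
        .setQuery(QueryBuilders.multiMatchQuery("some text", "field1", "field2"))
        .addSort(SortBuilders.scoreSort())
        .addSort(SortBuilders.fieldSort("id").order(SortOrder.ASC)) // tie-breaker
        .execute().actionGet();
```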
TTLPercolatorTests indexes docs with small TTLs which can trigger
AlreadyExpiredException. This is expected, if rare, and
we should just catch it.
Until today we closed the engine without acquiring the write lock,
since most calls were still holding a read lock. This commit removes
the code that holds on to the read lock when failing the engine, which
means we can simply call #close().
This test requires a mapping, since otherwise, if no mapping has been
added, the percolator query might not be parsed as a query on a numeric
field: the query might arrive on a node before the dynamic mapping
has reached that node.
This commit also moves the `indexService.readAllowed()` call up before
the number of percolation queries is checked, to make sure we fail if reads
are not allowed - there might be a query in-flight, which means we need
to check another node rather than return an empty result.
Load tests showed that SerialMS has problems keeping up with
the merges under high load. We should switch back to CMS
until we have a better story for balancing merge
threads / effort across shards on a single node.
Closes#5817
If, during percolation, a new field was introduced
into the local mapping service, then those changes should
be updated in the cluster state of the master as well.
Closes#5776
This method had some workarounds for bugs that seem to be fixed
in Java 7 [1]. There seem to be other problems on shared file systems,
which are not really supported by Lucene anyway, or rather not
recommended. Yet the current solution, which interrupts a static thread
reference, is too dangerous given all the usage of NIO across
Elasticsearch.
[1] http://bugs.java.com/bugdatabase/view_bug.do?bug_id=4742723
This method basically forcefully creates as many files as possible
to find out the process limit in a brute-force manner. The number of
possible problems with this approach would exceed the number of lines
left in this commit message.
This commit uses a JMX based alternative to print the process limit.
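For reference, a minimal sketch of the JMX-based approach (class and method names here are illustrative, not the actual Elasticsearch code):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

public final class ProcessLimits {
    // On Sun/Oracle JVMs the OS MXBean exposes the limit directly,
    // so no brute-force file creation is needed.
    public static long maxFileDescriptors() {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof com.sun.management.UnixOperatingSystemMXBean) {
            return ((com.sun.management.UnixOperatingSystemMXBean) os).getMaxFileDescriptorCount();
        }
        return -1; // unknown platform / not supported
    }
}
```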
This prevents executing the bulk's internal auto-create-indices logic
and ensures that this internal request never creates an index
automatically.
This fixes a bug where the TTL purger thread ran after the actual
index it was purging was already closed / deleted, and thereby re-created
that index.
Closes#5766
This is related to LUCENE-5570, where fsync creates a 0-byte file
if the file does not exist. This commit adds the patched Lucene
version using Java 7 APIs, as well as a note to replace this method
with the upcoming IOUtils#fsync in Lucene 4.8.
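A rough sketch of the idea behind the patch, using plain Java 7 NIO (not the actual patched Lucene code):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public static void fsyncIfExists(Path file) throws IOException {
    // Opening with CREATE+WRITE would silently create a 0-byte file
    // (the LUCENE-5570 pitfall), so bail out if there is nothing to sync.
    if (!Files.exists(file)) {
        return;
    }
    try (FileChannel channel = FileChannel.open(file, StandardOpenOption.WRITE)) {
        channel.force(true); // flush content and metadata to stable storage
    }
}
```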
This commit cleans up FsImmutableBlobContainer#writeBlob to make
use of Java 7's try-with-resources feature and ensures that the directory
the blob was written to is fsynced as well, if possible.
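A simplified sketch of the pattern (method and parameter names are illustrative):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

public static void writeBlob(InputStream data, Path blobFile) throws IOException {
    // try-with-resources guarantees the stream is closed even on failure
    try (InputStream in = data) {
        Files.copy(in, blobFile, StandardCopyOption.REPLACE_EXISTING);
    }
    // fsync the parent directory so the new directory entry survives a crash;
    // this is best-effort since not every platform supports it
    try (FileChannel dir = FileChannel.open(blobFile.getParent(), StandardOpenOption.READ)) {
        dir.force(true);
    } catch (IOException e) {
        // ignore: directory fsync is not possible here
    }
}
```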
For resources that have their life time effectively defined by the search
context they are attached to, it is convenient to use the search context to
schedule the release of such resources.
This commit changes aggregations to use this mechanism and also introduces
a `Lifetime` object that can be used to define how long the object should
live:
- COLLECTION: if the object only needs to live during collection time and is
what SearchContext.addReleasable would have chosen before this change
(used for p/c queries),
- SEARCH_PHASE for resources that only need to live during the current search
phase (DFS, QUERY or FETCH),
- SEARCH_CONTEXT for resources that need to live until the context is
destroyed.
Aggregators are currently registered with SEARCH_CONTEXT. The reason is that when
using the DFS_QUERY_THEN_FETCH search type, they are allocated during the DFS
phase but only used during the QUERY phase. However, we should fix this in order
to only allocate them during the QUERY phase and use SEARCH_PHASE as the
lifetime.
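A paraphrased sketch of the mechanism described above (simplified, not the exact source):

```java
// Resources register with the search context together with the lifetime
// that determines when they get released.
public enum Lifetime {
    COLLECTION,     // released after the current collection run
    SEARCH_PHASE,   // released at the end of the current phase (DFS, QUERY or FETCH)
    SEARCH_CONTEXT  // released when the search context is destroyed
}

// e.g. registering an aggregator for the lifetime of the whole context:
// searchContext.addReleasable(aggregator, Lifetime.SEARCH_CONTEXT);
```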
Close#5703
Java 7's AutoCloseable allows resources to be managed more nicely using
try-with-resources statements. Since the semantics of our Releasable interface
are very close to those of a Closeable, let's switch to it.
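For example, any Releasable can now be scoped with try-with-resources (the acquire call here is an illustrative stand-in):

```java
// Before this change: manual release in a finally block.
// After: the resource is released automatically, even on exceptions.
try (Releasable searcher = engine.acquireSearcher("my_operation")) {
    // use the resource within this scope
}
```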
Close#5689
When a create document operation is executed with an auto-generated id (based on UUID), we know that the document does not exist in the index yet, so there is no need to try to look up the version from the index.
For many cases, like logging, where ids are auto-generated, this can improve indexing performance, specifically for lightweight documents where analysis is not a big part of the execution.
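A usage sketch with the Java client (assuming an existing `client`; index and type names are illustrative); since no id is provided, one is auto-generated and the engine can skip the version lookup:

```java
import org.elasticsearch.action.index.IndexResponse;

IndexResponse response = client.prepareIndex("logs", "event") // no id: auto-generated
        .setSource("{\"message\":\"node started\",\"level\":\"INFO\"}")
        .setCreate(true) // create semantics: the doc cannot already exist
        .execute().actionGet();
```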
Due to the default of `async_merge` being `true` we never ran
the merge policy on a segment flush, which prevented the
pending merges from being updated and caused actual
pending merges not to contribute to the merge decision.
This commit also removes the `index.async.merge` setting, which is actually
misleading since we take care of merges not being executed on the
indexing threads on a different level (the merge scheduler) since 1.1.
This commit also adds an additional check for when to run a refresh,
since solely relying on the dirty flag might leave merges un-refreshed,
which can cause search slowdowns and higher memory consumption.
Closes#5779
When we load sparse single-valued data, we automatically assign a missing value to represent a document that has none. We try to find a value that will not increase the number of bits needed to represent the data. If that missing value happens to be 0, we do not properly initialize the value array.
This commit solves this problem and also cleans up the code even more to make spotting such issues easier in the future.
The AppendingDeltaPackedLongBuffer uses delta compression in a paged fashion. For data which is roughly monotonic this results in a reduced memory footprint.
By default we use the storage format expected to use the least memory. You can force a choice using the new field data setting `memory_storage_hint`, which can be set to `ORDINALS`, `PACKED` or `PAGED`.
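A hedged sketch of forcing the hint via the Java client (the exact placement under the field's `fielddata` settings in the mapping is our assumption here, as are the index and field names):

```java
client.admin().indices().prepareCreate("test")
        .addMapping("doc", "{\"doc\":{\"properties\":{\"num\":{"
                + "\"type\":\"long\","
                + "\"fielddata\":{\"memory_storage_hint\":\"paged\"}}}}}")
        .execute().actionGet();
```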
Closes#5706
Add an API endpoint at /_bench for submitting, listing, and aborting
search benchmarks. This API can be used for timing search requests,
subject to various user-defined settings.
Benchmark results provide summary and detailed statistics on such
values as min, max, and mean time. Values are reported per-node so that
it is easy to spot outliers. Slow requests are also reported.
Long running benchmarks can be viewed with a GET request, or aborted
with a POST request.
Benchmark results are optionally stored in an index for subsequent
analysis.
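For instance, a client could poll for running benchmarks with a plain HTTP GET against the new endpoint (host and port are illustrative):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public static String listBenchmarks() throws Exception {
    HttpURLConnection conn = (HttpURLConnection)
            new URL("http://localhost:9200/_bench").openConnection();
    conn.setRequestMethod("GET");
    StringBuilder body = new StringBuilder();
    try (BufferedReader reader = new BufferedReader(
            new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
        String line;
        while ((line = reader.readLine()) != null) {
            body.append(line).append('\n');
        }
    }
    return body.toString(); // benchmark status, reported per-node
}
```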
Closes#5407
In some cases, when we have a lot of docs in lots of shards,
recovery takes longer than 1m, causing the tests to fail before all
shards are recovered. This commit raises the timeout in this test to
5m max, even though it's rarely needed.
This commit also adds an assertion to ElasticsearchAssertions that
ensures that the cluster health requests are not hitting a timeout.
After a shard fails on a node we assign a new replica on another node. This is important in order to avoid failing again due to node-specific problems. In the rare case where two different replicas of the same shard failed in a short time span, we may fail to do so and assign one of them back to the node it's currently on. This happens if both shard-failed events are processed within the same batch on the master.
Closes#5725
In TransportSearchTypeAction we need to increment successfulOps
first, before we increment totalOps and compare it against the exit
condition. Otherwise, if we are fast, we could concurrently update totalOps
but then preempt one of the threads, which can cause the successor to read
a wrong value from successfulOps if the second phase is very fast
(i.e. searchType == count etc.). This can cause wrong success stats in the search response.
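A distilled sketch of the corrected ordering (the class shape, expectedOps and finishPhase are illustrative stand-ins):

```java
import java.util.concurrent.atomic.AtomicInteger;

class OpCounter {
    final AtomicInteger successfulOps = new AtomicInteger();
    final AtomicInteger totalOps = new AtomicInteger();
    final int expectedOps;

    OpCounter(int expectedOps) { this.expectedOps = expectedOps; }

    void onOperationSuccess() {
        successfulOps.incrementAndGet();       // record the success first...
        if (totalOps.incrementAndGet() == expectedOps) {
            finishPhase(successfulOps.get());  // ...so whichever thread finishes sees it
        }
    }

    void finishPhase(int successful) { /* move on to the next phase */ }
}
```

If totalOps were incremented first, another thread could hit the exit condition and read successfulOps before this thread had recorded its success.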
ElasticsearchRestTests now extends ElasticsearchIntegrationTest and makes use of our ordinary test infrastructure; in particular, all randomized aspects now come for free instead of having to maintain a separate (custom) test runner.
We previously parsed only the tests that needed to be run given the version of the cluster the tests were running against. This doesn't happen anymore, as it didn't buy much and would be harder to support now that the tests get parsed before the test cluster gets started. Thus all the tests are now parsed regardless of their skip sections; afterwards, the ones that don't need to be run are skipped through assume directives.
Fixed REST tests that rely on a specific number of shards, as this change also introduces a random number of shards and replicas (through randomIndexTemplate)
Closes#5654
Refactor the rest layer handlers to simplify common code paths (like failure handling), and introduce optional (enabled for netty) rest channel bytes recycling
- consolidated value source parsing under a single parser that is reused in all the value source aggs parsers
- consolidated include/exclude parsing under a single parser
- cleaned up value format handling, to have consistent behaviour across all values source aggs
Since we use a shared cluster, calling clear on the mock array / page recycler can remove a valid ongoing reference, and then, when it is released, the release will fail because it can't be found.
There is no real reason to call clear; checking if pages/arrays have been released takes the snapshot behavior here into account.
This change also makes sure we don't use the mock classes in places where we don't really release.
Note, with this change DoubleTermsTests fails, since it causes failures when creating aggs in the pre-process phase, causing obtained arrays not to be released. This needs to be fixed before pulling this change in.
Create a new releasable bytes output that can be recycled, and use it in netty and the translog, two areas where the recycling will help nicely.
Note, opted for a statically typed, enforced releasable bytes output, to make sure people take the extra care to control when the bytes reference is released.
Also, the mock page/array classes were fixed to not take into account potential recycling going on during teardown; for example, on a shared cluster ping requests still happen, so recycling happens actively during teardown.
closes#5691
Whether or not the stacktrace is displayed is controlled by the bootstrap
log level setting, so that `bootstrap: DEBUG` displays the stack trace on
output, like it does in the log.
Closes#5102
After upgrading to 1.1.0, sending null values to geo points produces the following error:
```
MapperParsingException[failed to parse]; nested: ElasticsearchParseException[geo_point expected];
```
Closes#5680.
Closes#5681.
This test initially had three purposes:
- duels between equivalent aggregations on significant amounts of data,
- make sure that array growth works (when the number of buckets grows larger
than the initial number of buckets),
- make sure that page recycling works correctly.
Because of the last point, it needed large numbers of docs/terms since page
recycling only kicks in on arrays of more than 16KB. However, since then, we
added a MockPageCacheRecycler to track allocation/release of BigArrays and make
sure that all arrays get released, so we can now lower these numbers of docs/
terms to just make sure that array growth is triggered.
Those tests use RandomIW, which can flush after each document, taking forever
to index the 20k docs they were indexing. This commit makes sure the IW is
sane and the number of docs is not too crazy.
The default precision was way too exact and could lead people to
think that geo context suggestions are not working. This patch now
requires you to set the precision in the mapping, as elasticsearch itself
can never tell exactly what the required precision for the user's
suggestions is.
Closes#5621
There is no explicit `flush`/`execute` method in the `BulkProcessor` class. Such a method can be useful in certain scenarios.
Currently one has to close and create a new BulkProcessor to get an immediate flush.
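A usage sketch of such a flush, assuming it lands on `BulkProcessor` itself and an existing `client`:

```java
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.index.IndexRequest;

BulkProcessor processor = BulkProcessor.builder(client, new BulkProcessor.Listener() {
    public void beforeBulk(long executionId, BulkRequest request) {}
    public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {}
    public void afterBulk(long executionId, BulkRequest request, Throwable failure) {}
}).build();

processor.add(new IndexRequest("test", "doc").source("{\"field\":1}"));
processor.flush(); // push out buffered requests now, without closing the processor
```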
Closes#5575.
Closes#5570.
Instead the default implementation is used, but on top of empty
(Bytes|Long|Double|GeoPoint)Values. This makes sure there is no
inconsistency between documents depending on whether other documents in the
segment have values or not.
Close#5646
When a bulk request triggers an index creation, a mapping might not be
created. The reason is that when indexing documents in a bulk,
an indexing operation might fail due to a shard not yet being
started. The mapping service, however, might already
have the mapping but the mapping update is never issued to the master,
even on subsequent indexing of documents.
Instead, the mapping must be propagated to master even if the
indexing fails due to a shard not being started.
closes#5623
A frequency-caching terms enum that can also be configured with an optional filter, to be used by both significant terms and the phrase suggester.
This change extracts the frequency caching into shared code, and allows adding a filter in the future to control/customize the background frequencies
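A simplified sketch of the shared caching idea (class name and shape are illustrative, not the actual implementation):

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.util.BytesRef;

public class CachingFreqLookup {
    private final Map<BytesRef, Integer> docFreqs = new HashMap<>();

    // Look a term up once and remember its document frequency, so repeated
    // lookups (significant terms, phrase suggester) are cheap.
    public int docFreq(TermsEnum termsEnum, BytesRef term) throws IOException {
        Integer cached = docFreqs.get(term);
        if (cached == null) {
            cached = termsEnum.seekExact(term) ? termsEnum.docFreq() : 0;
            docFreqs.put(BytesRef.deepCopyOf(term), cached);
        }
        return cached;
    }
}
```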
Closes#5597
Moved the circuit breaker memory-reducing logic to the IndicesFieldDataCacheListener, so it always reduces the memory used when field data gets unloaded.
This fixes an issue where the circuit breaker didn't get reduced when segments for which no shardId could be resolved got cleaned up.
Also made sure that exceptions in the percolator service are bubbled up properly.
Closes#5588
* Removed XTermsFilter fixed in LUCENE-5502
* Switched back to automaton queries that caused failures due to LUCENE-5532
* Fixed Highlight test that has different results due to LUCENE-5538
Currently we throw a misleading exception in acquireSearcher
if we try to acquire while we are failing the engine. We should
throw an EngineClosedException instead.
Relates to #5633
When a refresh fails, it fails due to a serious issue in the shard (mainly a corrupted index). Fail the engine in that case, which will cause the shard to fail. The reason why this is not limited to CorruptIndexException is that any type of failure seems relevant enough here to fail the shard.
closes#5633
Update RelocationTests to use the above, and unify testPrimaryRelocationWhileIndexing & testReplicaRelocationWhileIndexingRandom into a single randomized test.
This commit speeds up aggregations tests dramatically since they all
rely on the same index structure. Here we can save a large amount of time
by not recreating the index for each small test.