Whether or not the stack trace is displayed is controlled by the bootstrap log level setting, so that bootstrap: DEBUG displays the stack trace on the console output, just as it does in the log.
Closes#5102
After upgrading to 1.1.0, sending null values to geo points produces the following error:
```
MapperParsingException[failed to parse]; nested: ElasticsearchParseException[geo_point expected];
```
Closes#5680.
Closes#5681.
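For illustration, a minimal sketch of the kind of indexing call that triggers this, assuming a field mapped as geo_point; the index, type, and field names are hypothetical:
```java
import java.io.IOException;

import org.elasticsearch.client.Client;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;

public class NullGeoPointExample {

    // "my-index", "my-type" and "location" are made-up names; "location" is assumed
    // to be mapped as a geo_point field.
    public static void indexWithNullLocation(Client client) throws IOException {
        XContentBuilder doc = XContentFactory.jsonBuilder()
                .startObject()
                .field("name", "somewhere")
                .nullField("location") // a null geo_point used to fail with "geo_point expected"
                .endObject();
        client.prepareIndex("my-index", "my-type")
                .setSource(doc)
                .execute()
                .actionGet();
    }
}
```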
This test initially had three purposes:
- duels between equivalent aggregations on significant amounts of data,
- make sure that array growth works (when the number of buckets grows larger
than the initial number of buckets),
- make sure that page recycling works correctly.
Because of the last point, it needed large numbers of docs/terms since page
recycling only kicks in on arrays of more than 16KB. However, since then, we
added a MockPageCacheRecycler to track allocation/release of BigArrays and make
sure that all arrays get released, so we can now lower these numbers of docs/
terms to just make sure that array growth is triggered.
Those tests use RandomIndexWriter, which can flush after every document, so indexing the 20k docs could take forever. This commit makes sure the IndexWriter settings are sane and the number of docs is not too crazy.
The default precision was way too exact and could lead people to think that geo context suggestions are not working. This patch now requires you to set the precision in the mapping, as Elasticsearch itself can never tell exactly what precision the user's suggestions require.
Closes#5621
There is no explicit `flush/execute` method in the `BulkProcessor` class, even though one can be useful in certain scenarios. Currently, getting an immediate flush requires closing the BulkProcessor and creating a new one.
Closes#5575.
Closes#5570.
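A minimal usage sketch, assuming the new method is exposed as flush() on BulkProcessor; the index, type, and listener bodies are placeholders:
```java
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.Client;

public class BulkFlushExample {

    public static void indexAndFlush(Client client) {
        BulkProcessor processor = BulkProcessor.builder(client, new BulkProcessor.Listener() {
            @Override
            public void beforeBulk(long executionId, BulkRequest request) {}
            @Override
            public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {}
            @Override
            public void afterBulk(long executionId, BulkRequest request, Throwable failure) {}
        }).setBulkActions(1000).build();

        processor.add(new IndexRequest("my-index", "my-type").source("{\"field\":\"value\"}"));

        // Previously the only way to push out buffered requests immediately was to
        // close the processor and build a new one; with an explicit flush the
        // pending requests are sent right away and the processor keeps running.
        processor.flush();

        processor.close();
    }
}
```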
Instead the default implementation is used, but on top of empty
(Bytes|Long|Double|GeoPoint)Values. This makes sure there is no
inconsistency between documents depending on whether other documents in the
segment have values or not.
Close#5646
When a bulk request triggers an index creation, a mapping might not be created. The reason is that, when indexing documents in a bulk, an indexing operation might fail because a shard has not yet started. The mapping service, however, might already have the mapping, but the mapping update is never issued to the master, even on subsequent indexing of documents. Instead, the mapping must be propagated to the master even if the indexing fails because a shard has not started.
closes#5623
A frequency-caching terms enum that can also be configured with an optional filter, to be used by both significant terms and the phrase suggester. This change extracts the frequency caching into shared code and allows a filter to be added in the future to control/customize the background frequencies.
Closes#5597
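A minimal self-contained sketch of the frequency-caching idea, assuming only the Lucene TermsEnum API; it is not the actual extracted class, and the optional filter mentioned above is omitted:
```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.util.BytesRef;

// Caches per-term frequencies so that repeated lookups of the same term
// (e.g. by significant terms and the phrase suggester) hit the cache instead
// of seeking the terms enum again.
public class FreqCachingTermsLookup {

    private static class TermStats {
        final int docFreq;
        final long totalTermFreq;

        TermStats(int docFreq, long totalTermFreq) {
            this.docFreq = docFreq;
            this.totalTermFreq = totalTermFreq;
        }
    }

    private final TermsEnum termsEnum;
    private final Map<BytesRef, TermStats> cache = new HashMap<>();

    public FreqCachingTermsLookup(TermsEnum termsEnum) {
        this.termsEnum = termsEnum;
    }

    public int docFreq(BytesRef term) throws IOException {
        return stats(term).docFreq;
    }

    public long totalTermFreq(BytesRef term) throws IOException {
        return stats(term).totalTermFreq;
    }

    private TermStats stats(BytesRef term) throws IOException {
        TermStats cached = cache.get(term);
        if (cached == null) {
            if (termsEnum.seekExact(term)) {
                cached = new TermStats(termsEnum.docFreq(), termsEnum.totalTermFreq());
            } else {
                cached = new TermStats(0, 0);
            }
            // deep-copy the bytes: callers may reuse the BytesRef instance
            cache.put(BytesRef.deepCopyOf(term), cached);
        }
        return cached;
    }
}
```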
Moved the circuit breaker memory-reducing logic to the IndicesFieldDataCacheListener, so it always reduces the memory used when field data gets unloaded. This fixes an issue where the circuit breaker was not reduced when segments for which no shardId could be resolved were cleared up.
Also made sure that exceptions in the percolator service are bubbled up properly.
Closes#5588
* Removed XTermsFilter, fixed in LUCENE-5502
* Switched back to automaton queries that had caused failures due to LUCENE-5532
* Fixed a highlight test that has different results due to LUCENE-5538
Currently we throw a misleading exception in acquireSearcher
if we try to acquire a searcher while we are failing the engine. We should
throw an EngineClosedException instead.
Relates to #5633
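A minimal sketch of the intended behaviour, not the actual engine code; the class, fields, and the nested placeholder types below are hypothetical stand-ins for InternalEngine and EngineClosedException so the sketch is self-contained:
```java
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative only: "EngineSketch" and its nested placeholder types are hypothetical,
// not the real InternalEngine / EngineClosedException classes.
public class EngineSketch {

    private final AtomicBoolean closed = new AtomicBoolean(false);

    public Searcher acquireSearcher(String source) {
        if (closed.get()) {
            // The engine is closed or currently being failed: say so explicitly
            // instead of letting an unrelated, misleading exception escape.
            throw new EngineClosedException("cannot acquire searcher for [" + source + "], engine is closed");
        }
        // ... acquire and return a real searcher here ...
        return new Searcher();
    }

    public void failEngine(Throwable reason) {
        // Failing the engine also marks it closed, so concurrent acquireSearcher
        // calls report the closed state rather than a confusing secondary error.
        closed.set(true);
    }

    // Local placeholders so the sketch compiles on its own.
    public static class Searcher {}

    public static class EngineClosedException extends RuntimeException {
        public EngineClosedException(String message) {
            super(message);
        }
    }
}
```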
When a refresh fails, it fails due to a serious issue in the shard (mainly a corrupted index). Fail the engine in that case, which will cause the shard to fail. The reason this is not limited to corrupted-index failures is that any type of failure seems relevant enough here to fail the shard.
closes#5633
Update RelocationTests to use the above, and unify testPrimaryRelocationWhileIndexing & testReplicaRelocationWhileIndexingRandom into a single randomized test.
This commit speeds up aggregation tests dramatically since they all
rely on the same index structure. Here we can save a large amount of time
by not recreating the index for each small test.