Today we use the ConcurrentMergeScheduler, and this can be painful since it is concurrent at the shard level, with a maximum of 3 threads doing concurrent merges. If several shards are being indexed, there will be a minor explosion of threads trying to do merges, all throttled by our merge throttling.
Moving to the serial merge scheduler still maintains concurrency of merges across shards, since the merge thread pool schedules those merges; it is simply serial on a specific shard.
Also, the serial merge scheduler now has a limit on how many merges it will do in one go, so other shards get their fair chance at merging. We use the pending merges on IW to check whether merges are needed for it.
Note that if a merge is happening, it will not block due to a sync on the maybeMerge call at indexing (flush) time, since we wrap our merge scheduler in the EnabledMergeScheduler, where maybeMerge is not activated during indexing, only by explicit calls to IW#maybeMerge (see Merges).
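A purely illustrative sketch of the scheduling idea (not the actual SerialMergeScheduler/EnabledMergeScheduler wiring; all names here are hypothetical): one shared pool for all shards, at most one active merge per shard, and one merge per turn so other shards get their fair chance:
```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;

// Hypothetical sketch: merges stay serial within a shard but run concurrently
// across shards via one shared merge pool.
class PerShardMergeQueue {
    private final ExecutorService sharedMergePool; // shared by all shards
    private final Queue<Runnable> pendingMerges = new ConcurrentLinkedQueue<Runnable>();
    private boolean draining; // guarded by 'this'

    PerShardMergeQueue(ExecutorService sharedMergePool) {
        this.sharedMergePool = sharedMergePool;
    }

    synchronized void submit(Runnable merge) {
        pendingMerges.add(merge);
        if (!draining) {
            draining = true;
            sharedMergePool.execute(new Drainer());
        }
    }

    private class Drainer implements Runnable {
        @Override
        public void run() {
            // run a single merge per turn, then requeue, so other shards
            // sharing the pool get a slot between our merges
            Runnable next = pendingMerges.poll();
            try {
                if (next != null) {
                    next.run();
                }
            } finally {
                synchronized (PerShardMergeQueue.this) {
                    if (pendingMerges.isEmpty()) {
                        draining = false;
                    } else {
                        sharedMergePool.execute(this);
                    }
                }
            }
        }
    }
}
```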
Closes #5447
Two changes:
1. In the StupidBackoffScorer, only look for the trigram if there is a bigram.
2. Cache the frequencies in WordScorer so we don't look them up again and
again. This is implemented by wrapping the TermsEnum in a special-purpose
wrapper that really only works in the context of the WordScorer.
This provides a pretty substantial speedup when there are many candidates.
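For illustration, a hedged sketch of the memoization idea (the real change wraps Lucene's TermsEnum; the names here are hypothetical):
```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the caching idea: remember each term's frequency so
// repeated lookups for the same candidate hit a map instead of re-seeking
// the underlying TermsEnum.
class CachingFrequencyLookup {
    interface FrequencyLookup {
        long frequency(String term); // e.g. backed by TermsEnum#seekExact + docFreq
    }

    private final Map<String, Long> cache = new HashMap<String, Long>();
    private final FrequencyLookup delegate;

    CachingFrequencyLookup(FrequencyLookup delegate) {
        this.delegate = delegate;
    }

    long frequency(String term) {
        Long cached = cache.get(term);
        if (cached == null) {
            cached = delegate.frequency(term);
            cache.put(term, cached);
        }
        return cached;
    }
}
```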
Closes #5395
* Index one document at a time only rarely when indexing more than 300.
* When launching async actions, take some care to make sure there aren't
already more than 150 other async actions in flight.
* When indexing in bulk, split into chunks of 1000 documents (see the sketch below).
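A hypothetical sketch (not the actual test helper) of that throttling: index in chunks of 1000 and never allow more than 150 async requests in flight, releasing a permit whenever one completes:
```java
import java.util.List;
import java.util.concurrent.Semaphore;

// Hypothetical sketch: chunked bulk indexing with a cap on in-flight requests.
class ThrottledIndexer {
    private static final int CHUNK_SIZE = 1000;
    private static final int MAX_IN_FLIGHT = 150;
    private final Semaphore inFlight = new Semaphore(MAX_IN_FLIGHT);

    void indexAll(List<String> docs) throws InterruptedException {
        for (int from = 0; from < docs.size(); from += CHUNK_SIZE) {
            int to = Math.min(from + CHUNK_SIZE, docs.size());
            inFlight.acquire(); // blocks once 150 requests are outstanding
            sendBulkAsync(docs.subList(from, to));
        }
    }

    private void sendBulkAsync(List<String> chunk) {
        // ...send the bulk request; on completion (success or failure) call:
        // inFlight.release();
    }
}
```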
As seen during CI tests, the download service may occasionally be unavailable for any number of reasons.
This fix makes every test that requires internet access (annotated with @Network) first check that the download service under test is still working.
It won't fail the test, but will mark it as `Ignored` if the check fails.
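A hedged sketch of such a pre-check (names and the probe are illustrative, assuming JUnit's Assume is available): probe the service and skip the test via an assumption instead of failing it:
```java
import java.net.HttpURLConnection;
import java.net.URL;
import org.junit.Assume;

// Hypothetical sketch: called at the start of each @Network test.
class NetworkChecks {
    static void assumeDownloadServiceReachable(String serviceUrl) {
        boolean reachable;
        try {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(serviceUrl).openConnection();
            conn.setConnectTimeout(5000);
            conn.setRequestMethod("HEAD");
            reachable = conn.getResponseCode() < 500;
        } catch (Exception e) {
            reachable = false;
        }
        // a failed assumption marks the test ignored rather than failed
        Assume.assumeTrue("download service unreachable: " + serviceUrl, reachable);
    }
}
```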
As documented in systemd's manual pages tmpfiles.d(5) and systemd.unit(5),
a package should install its default configuration in /usr/lib, which can
be overridden by system administrators in /etc.
New locations in the rpm:
/usr/lib/systemd/system/elasticsearch.service
/usr/lib/tmpfiles.d/elasticsearch.conf
Building elasticsearch now requires Java 1.7.
Maven will check this before compiling any class. If the Java version is incorrect, you will get the following message:
```
[WARNING] Rule 0: org.apache.maven.plugins.enforcer.RequireJavaVersion failed with message:
Detected JDK Version: 1.6.0-65 is not in the allowed range [1.7,).
```
Closes #5428.
If the async sendPingsHandler throws an unexpected exception, the
corresponding latch is never counted down. This might only happen
during node shutdown but can still cause starvation or test failures.
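A hypothetical sketch of the fix (class and method names are illustrative): count the latch down in a finally block so an exception in the handler can no longer starve waiters:
```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;

// Hypothetical sketch: the latch is released on every code path.
class PingSender {
    private final ExecutorService executor;

    PingSender(ExecutorService executor) {
        this.executor = executor;
    }

    void sendPings(final CountDownLatch latch) {
        executor.execute(new Runnable() {
            @Override
            public void run() {
                try {
                    doSendPings(); // may throw, e.g. during node shutdown
                } finally {
                    latch.countDown(); // always runs, even on failure
                }
            }
        });
    }

    private void doSendPings() { /* ... */ }
}
```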
If we want to have a full picture of versions running in a cluster, we need to add a `_cat/plugins` endpoint.
Response could look like:
```sh
% curl es2:9200/_cat/plugins?v
node component          version   type url                                   desc
es1  mapper-attachments 1.7.0     j                                          Adds the attachment type allowing to parse different attachment formats
es1  lang-javascript    1.4.0     j                                          JavaScript plugin allowing to add javascript scripting support
es1  analysis-smartcn   1.9.0     j                                          Smart Chinese analysis support
es1  marvel             1.1.0     j/s  http://localhost:9200/_plugins/marvel Elasticsearch Management & Monitoring
es1  kopf               0.5.3     s    http://localhost:9200/_plugins/kopf   kopf - simple web administration tool for ElasticSearch
es2  mapper-attachments 2.0.0.RC1 j                                          Adds the attachment type allowing to parse different attachment formats
es2  lang-javascript    2.0.0.RC1 j                                          JavaScript plugin allowing to add javascript scripting support
es2  analysis-smartcn   2.0.0.RC1 j                                          Smart Chinese analysis support
```
Closes #4824.
* Fixed minor logging discrepancies introduced with the randomized shard count.
* Added logging to recoverWhileUnderLoadWithNodeShutdown.
* Added logging to ElasticsearchIntegrationTest.allowNodes to indicate which nodes were excluded.
* Fixed recoverWhileRelocating's shard settings being potentially ignored (depending on key order in hashmaps).
When you configure a BulkProcessor with a bulk actions size of 100, it only executes the bulk after 101 documents.
```java
BulkProcessor.builder(client(), listener).setBulkActions(100).setConcurrentRequests(1).setName("foo").build();
```
The same goes for size: if you set the bulk size to 1024 bytes, it actually executes the bulk after 1025 bytes.
This patch fixes it.
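One way to picture the off-by-one (illustrative only, not the actual BulkProcessor internals; `numberOfActions` and `bulkActions` mirror its settings):
```java
// Hypothetical condensed view of the threshold check: comparing with '>'
// only fires at bulkActions + 1 (the 101st document); '>=' fires at 100.
class ExecuteCondition {
    static boolean shouldExecute(int numberOfActions, int bulkActions) {
        // buggy: return numberOfActions > bulkActions;
        return numberOfActions >= bulkActions;
    }
}
```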
Closes #4265.
We have the default QueryWrapperFilter as well as our custom one; our
wrapper is explicitly marked as no_cache so that it will never
be included in a cache. This was not consistently used and caused several
problems during tests where p/c (parent/child) related queries were used as filters
and ended up in the cache. This commit adds the QueryWrapperFilter
ctor to the forbidden APIs to enforce the query instance checks.
Some mappers do not support setting externalValue(), so plugin developers can't use it when building their own mappers.
Support added in this PR (see the sketch after the list) for:
* `BinaryFieldMapper`
* `BooleanFieldMapper`
* `GeoPointFieldMapper`
* `GeoShapeFieldMapper`
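A heavily hedged sketch of the pattern this enables inside a mapper's parse method (ES 1.x-era API; the stand-in interface below only mimics the relevant bits of ParseContext, and exact names may differ):
```java
import java.io.IOException;

// Minimal stand-in for the relevant bits of ES's ParseContext:
interface ParseContext {
    boolean externalValueSet();
    Object externalValue();
}

// Hypothetical fragment of a mapper: prefer a value set externally
// (e.g. by an enclosing plugin mapper) over parsing the current token.
abstract class ExternalValueAwareMapper {
    void parse(ParseContext context) throws IOException {
        final Object value;
        if (context.externalValueSet()) {
            value = context.externalValue(); // supplied by the wrapping mapper
        } else {
            value = parseFromSource(context); // normal path: read from _source
        }
        addField(value);
    }

    abstract Object parseFromSource(ParseContext context) throws IOException;

    abstract void addField(Object value);
}
```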
Closes #4986.
Related to #4154.
Due to bugs in the JVM (specifically on OSX), running the zen discovery tests causes "socket close" failures when receiving on a multicast socket and, under some JVM versions, even crashes. This happens because multiple multicast sockets are created within the same VM. In practice, our tests use the same settings, so we can share the same multicast socket across multiple channels.
This change creates an abstraction called MulticastChannel that can be shared, with ref counting. Today, the shared option is only enabled on OSX.
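A hypothetical sketch of the ref-counted sharing (not the actual MulticastChannel code): callers with the same settings reuse one channel, and only the last close() really closes the socket:
```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: the channel starts with one reference held by its creator.
class SharedMulticastChannel {
    private final AtomicInteger refCount = new AtomicInteger(1);

    SharedMulticastChannel acquire() {
        refCount.incrementAndGet(); // another channel now shares this socket
        return this;
    }

    void close() {
        if (refCount.decrementAndGet() == 0) {
            closeSocket(); // actually close the underlying multicast socket
        }
    }

    private void closeSocket() { /* ... */ }
}
```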
Closes #5410
A missing call to ArrayUtil.grow prevented the array that stores the values
from growing when the number of values returned by the script was higher
than the original size of the array.
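An illustrative sketch of the fix (the class and field names are hypothetical; ArrayUtil is Lucene's org.apache.lucene.util.ArrayUtil):
```java
import org.apache.lucene.util.ArrayUtil;

// Hypothetical sketch: grow the backing array before writing when the script
// returns more values than the array currently holds.
class ScriptValues {
    private double[] values = new double[8];

    void resizeTo(int numValues) {
        // the missing call: ArrayUtil.grow returns an array of at least
        // numValues elements, copying over the existing contents
        values = ArrayUtil.grow(values, numValues);
    }
}
```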
Closes #5414