When we know the ID for the document we are about to index was
auto-generated, we don't need to acquire/release the per-ID lock,
which might provide small speedups during highly concurrent indexing.
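A minimal, self-contained sketch of the idea, using a hypothetical striped-lock class rather than the actual engine code (the names and stripe count are illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch only: the real engine has its own per-ID dirty locks; the point here
// is the fast path that skips locking for auto-generated IDs.
class AutoIdIndexingSketch {
    private final ReentrantLock[] stripes = new ReentrantLock[64];

    AutoIdIndexingSketch() {
        for (int i = 0; i < stripes.length; i++) {
            stripes[i] = new ReentrantLock();
        }
    }

    void index(String id, boolean autoGeneratedId, Runnable write) {
        if (autoGeneratedId) {
            // the ID cannot collide with a concurrent operation on the same
            // document, so the per-ID lock can be skipped entirely
            write.run();
            return;
        }
        ReentrantLock lock = stripes[(id.hashCode() & 0x7fffffff) % stripes.length];
        lock.lock();
        try {
            write.run();
        } finally {
            lock.unlock();
        }
    }
}
```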
Closes #6584
In order to have access to all netty codecs and handlers, they
need to be exposed during shading; otherwise only the classes that
are used by the build are exposed.
The match_phrase_prefix query provided the same explanation as the match_phrase
query. There was no indication that the last term was run as a prefix
query.
This change marks the last term (or terms, if there are multiple terms
in the same position) with a *.
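For illustration only (this is not the actual MultiPhrasePrefixQuery code), a sketch of how an explanation string could mark the last-position terms as prefixes:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical helper: render a phrase where every term in the last position
// gets a trailing "*" to show it is run as a prefix query.
class PhrasePrefixDescription {
    static String describe(List<List<String>> termsPerPosition) {
        StringBuilder sb = new StringBuilder("\"");
        for (int pos = 0; pos < termsPerPosition.size(); pos++) {
            if (pos > 0) {
                sb.append(' ');
            }
            List<String> terms = termsPerPosition.get(pos);
            boolean last = pos == termsPerPosition.size() - 1;
            if (terms.size() > 1) {
                sb.append('(');
            }
            for (int i = 0; i < terms.size(); i++) {
                if (i > 0) {
                    sb.append(' ');
                }
                sb.append(terms.get(i));
                if (last) {
                    sb.append('*'); // mark the prefix term(s)
                }
            }
            if (terms.size() > 1) {
                sb.append(')');
            }
        }
        return sb.append('"').toString();
    }

    public static void main(String[] args) {
        // prints "quick (brown* bro*)" (including the quotes);
        // only the last position is marked
        System.out.println(describe(Arrays.asList(
                Arrays.asList("quick"),
                Arrays.asList("brown", "bro"))));
    }
}
```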
Closes #2449
During phase1 we copy over all Lucene segments. These may refer to mapping updates that are still queued up to be sent to the master. We must make sure those pending updates are processed before completing the relocation.
Relates to #6648
Closes #6762
we need to make sure we wait for all threads to finish executing, since there might still be a thread around even after the await (i.e. one that is just starting) performing updates
the pending tasks api will now include the currently executing tasks (marked with a proper boolean flag)
this will also help tests that wait for no pending tasks to also wait until the currently executing task is done
closes #6744
When many new fields keep being introduced, the immutable open map we use becomes more and more expensive, because every change effectively clones it, and we use it in several places.
Its usage semantics allow us to use a CHM instead, but it would be nice to still keep the concurrency characteristics of a volatile immutable map when the number of fields is sane.
Introduce a new map-like data structure that can switch internally to a CHM when a certain threshold is met.
Also add a benchmark class that exercises the many-new-field-mappings use case; it shows significant gains with this change, to the point where mapping introduction is no longer a significant bottleneck.
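A simplified, self-contained sketch of the data structure (the class name and threshold are assumptions; the real implementation and defaults may differ):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Reads always go through a volatile reference and never lock. Writes are
// copy-on-write while the map is small; once a threshold is crossed the map
// switches to a ConcurrentHashMap so further writes stop cloning.
class SwitchingMapSketch<K, V> {
    private static final int SWITCH_THRESHOLD = 1024; // assumed value

    private volatile Map<K, V> map = new HashMap<>();
    private volatile boolean switchedToConcurrent = false;

    V get(K key) {
        return map.get(key); // lock-free read of the current snapshot or CHM
    }

    synchronized void put(K key, V value) {
        if (switchedToConcurrent) {
            map.put(key, value); // CHM: no cloning needed anymore
            return;
        }
        Map<K, V> copy = new HashMap<>(map);
        copy.put(key, value);
        if (copy.size() >= SWITCH_THRESHOLD) {
            map = new ConcurrentHashMap<>(copy);
            switchedToConcurrent = true;
        } else {
            map = copy; // publish the new immutable snapshot
        }
    }
}
```

Writers are serialized here for simplicity; the key property is that readers stay lock-free in both modes, and the cloning cost stops growing once the field count gets large.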
closes #6707
This method tries to optimize boolean queries if there is only
one clause. Yet BooleanQuery already does that internally, so this
optimization is unneeded.
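Roughly the shape of the removed helper (the method name is illustrative): it unwrapped a single-clause BooleanQuery, which `BooleanQuery#rewrite` already does on its own.

```java
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;

// Redundant optimization: a BooleanQuery with a single non-prohibited clause
// is replaced by that clause's query. BooleanQuery's own rewrite performs the
// same unwrapping, so this helper adds nothing.
static Query optimizeSingleClause(BooleanQuery booleanQuery) {
    BooleanClause[] clauses = booleanQuery.getClauses();
    if (clauses.length == 1 && !clauses[0].isProhibited()) {
        return clauses[0].getQuery();
    }
    return booleanQuery;
}
```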
Closes #6743
If you try to remove a plugin from a read-only directory, you get a successful result:
```
$ bin/plugin --remove marvel
-> Removing marvel
Removed marvel
```
But actually the plugin has not been removed.
When installing, it fails properly:
```
$ bin/plugin -i elasticsearch/marvel/latest
-> Installing elasticsearch/marvel/latest...
Failed to install elasticsearch/marvel/latest, reason: plugin directory /usr/local/elasticsearch/plugins is read only
```
This change throws an exception when we don't succeed in removing the plugin.
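A minimal sketch of the fix, with hypothetical helper names rather than the actual PluginManager code:

```java
import java.io.File;
import java.io.IOException;

class PluginRemovalSketch {
    // Fail loudly when the plugin directory could not be deleted, instead of
    // printing a success message while leaving the plugin in place.
    static void removePlugin(File pluginDir) throws IOException {
        if (!pluginDir.exists()) {
            throw new IOException("plugin " + pluginDir.getName() + " not found");
        }
        if (!deleteRecursively(pluginDir)) {
            throw new IOException("plugin directory " + pluginDir
                    + " could not be removed (is it read only?)");
        }
    }

    static boolean deleteRecursively(File file) {
        File[] children = file.listFiles();
        if (children != null) {
            for (File child : children) {
                if (!deleteRecursively(child)) {
                    return false;
                }
            }
        }
        return file.delete();
    }
}
```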
Closes #6546.
Adding code to check for unset plugin names so that we fail fast with descriptive error messages. Also simplified the series of `if` statements checking for the commands by using a `switch` (now that it's using Java 7), added tests, and updated random exceptions with the up-to-date flag names (e.g., "--verbose" instead of "-verbose").
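For illustration, the shape of that change (the flag names here are examples, not an exhaustive or authoritative list): a `switch` over the command string instead of a chain of `if`/`else if` comparisons, which Java 7 allows:

```java
// Hypothetical argument dispatch; the real PluginManager handles more flags
// and also consumes the following argument (URL, plugin name, etc.).
static void handleCommand(String arg) {
    switch (arg) {
        case "-u":
        case "--url":
            // read the download URL that follows
            break;
        case "-i":
        case "--install":
            // read the plugin name to install
            break;
        case "-r":
        case "--remove":
            // read the plugin name to remove
            break;
        case "-v":
        case "--verbose":
            // enable verbose output
            break;
        default:
            throw new IllegalArgumentException("unknown command " + arg);
    }
}
```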
Closes #5976.
Closes #6013.
This causes an NPE, since XContentStructure checks whether the query is null
and takes this as the condition to parse from the byte source, which is
actually null in that case.
Closes #6722
In this test we only index a handful of docs, so if we have more shards
than docs we might fail on the `assertSearchResult` because not all shards
are started, even though the results are just fine.
Today, we take great care to try and share the same analyzer instances across shards and indices (the global analyzer). The idea is to share the same analyzer so that the thread-local resources it holds will not be allocated per analyzer instance per thread.
The problem is that AnalyzerWrapper keeps its resources in its own per-thread storage, and with the per-field reuse strategy this causes per-field, per-thread token stream components to be used. This is very evident with the StandardTokenizer, which uses a buffer...
This came out of a test with "many fields", where the majority of a 1GB heap was consumed by StandardTokenizer instances...
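A plain-Java illustration of the difference (not Lucene's actual ReuseStrategy classes): with per-field reuse the cached components are keyed by field name, so N fields mean N cached tokenizer buffers per thread, while a global strategy keeps exactly one.

```java
import java.util.HashMap;
import java.util.Map;

class ReuseStrategySketch {
    // per-field reuse: one cached buffer per (thread, field) pair
    private final ThreadLocal<Map<String, char[]>> perField =
            new ThreadLocal<Map<String, char[]>>() {
                @Override
                protected Map<String, char[]> initialValue() {
                    return new HashMap<String, char[]>();
                }
            };

    // global reuse: a single cached buffer per thread, shared across fields
    private final ThreadLocal<char[]> global = new ThreadLocal<char[]>();

    char[] bufferFor(String field, boolean perFieldReuse) {
        if (perFieldReuse) {
            Map<String, char[]> cache = perField.get();
            char[] buffer = cache.get(field);
            if (buffer == null) {
                buffer = new char[4096]; // stands in for StandardTokenizer's buffer
                cache.put(field, buffer);
            }
            return buffer;
        }
        char[] buffer = global.get();
        if (buffer == null) {
            buffer = new char[4096];
            global.set(buffer);
        }
        return buffer;
    }
}
```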
closes #6714
After a node joins the cluster, it starts pinging the master to verify its health. Before, the cluster join request was processed asynchronously and we had to give it some time to complete. With #6480 we changed this to wait for the join process to complete on the master. We can therefore start pinging immediately for fast detection of failures. A similar change can be made to the node fault detection on the master side.
Closes #6706
The change introduced in #6686 checks for a ConnectionTransportException during pinging. However, the transport layer wraps it in a SendRequestTransportException.