Both package types, RPM and deb, now contain an option to not restart on upgrade.
This option can be configured in /etc/default/elasticsearch for dpkg-based systems
and /etc/sysconfig/elasticsearch for rpm-based systems.
By default the behavior is unchanged: a restart is executed on upgrade.
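A minimal sketch of the override, assuming the option is exposed as a `RESTART_ON_UPGRADE` variable (the exact variable name is an assumption; check the packaged file):

```sh
# /etc/default/elasticsearch (deb) or /etc/sysconfig/elasticsearch (rpm)
# Hypothetical option name -- verify against the file shipped by the package.
# Set to false to keep the running node untouched during a package upgrade.
RESTART_ON_UPGRADE=false
```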
Closes#3685
In case of retries, we update the clusterState and shardsIt; make them volatile so the updates are visible (even though updates will probably go through a memory barrier anyway, this might explain the rare failures we see when a retry happens)
Introduced ElasticsearchAssertions that check for failures in all responses
Added a comment to flush requests, as we don't check for failures there
Attempt to remove Thread.sleep in testChangeInitialShardsRecovery in favour of an awaitBusy block
Added an awaitBusy block in testQuorumRecovery waiting for YELLOW. We could have GREEN after closing the node, but we want to wait for the YELLOW state.
As we are within an awaitBusy block, it doesn't make sense to have an assertion, since it would fail the test instead of waiting until the condition is met (or the timeout expires)
In 0.90.4, we deprecated some code:
* `GetIndexTemplatesRequest#GetIndexTemplatesRequest(String)` moved to `GetIndexTemplatesRequest#GetIndexTemplatesRequest(String...)`
* `GetIndexTemplatesRequest#name(String)` moved to `GetIndexTemplatesRequest#names(String...)`
* `GetIndexTemplatesRequest#name()` moved to `GetIndexTemplatesRequest#names()`
* `GetIndexTemplatesRequestBuilder#GetIndexTemplatesRequestBuilder(IndicesAdminClient, String)` moved to `GetIndexTemplatesRequestBuilder#GetIndexTemplatesRequestBuilder(IndicesAdminClient, String...)`
* `IndicesAdminClient#prepareGetTemplates(String)` moved to `IndicesAdminClient#prepareGetTemplates(String...)`
* `AbstractIndicesAdminClient#prepareGetTemplates(String)` moved to `AbstractIndicesAdminClient#prepareGetTemplates(String...)`
We can now remove those old methods in 1.0.
**Note**: it breaks the Java API
Relates to #2532.
Closes#3681.
`/_template` shows: `No handler found for uri [/_template] and method [GET]`
It would make sense to list the templates as they are listed in the /_cluster/state call.
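For illustration, listing templates would then work along these lines (endpoint paths assumed to mirror the existing template API):

```sh
# List all index templates (previously answered with "No handler found").
curl -XGET 'http://localhost:9200/_template'

# Fetch a single template by name.
curl -XGET 'http://localhost:9200/_template/template_1'
```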
Closes#2532.
This commit adds a test that checks that we are closing
all files even if random IOExceptions are happening. The test
found several places where streams were not closed when
IOExceptions occurred, due to insufficient exception handling.
MockDirectoryWrapper adds assertion logic to the low-level directory
implementation that helps to track and catch resource leaks like
unclosed index inputs caused by dangling IndexReader or IndexSearcher
instances. It prevents double writes to files and allows low-level
random exceptions to be thrown for testing index consistency etc.
Closes#3654
Tests now support ignoring integration tests by passing
'-Dtests.integration=false' to the test runner, either via the IDE or
maven, to only run the fast unit tests.
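For example, a sketch of a Maven invocation with that flag (assuming system properties are passed through to the test runner as usual):

```sh
# Run only the fast unit tests, skipping integration tests.
mvn test -Dtests.integration=false
```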
Dynamic mapping allows new fields to be introduced dynamically into an existing mapping. There is a (pretty rare) race condition where a new field/object being introduced will not be immediately visible to another document that introduces it at the same time.
closes#3667, closes#3544
Improved LoggingListener (which reads and applies the @TestLogging annotation) to take the logger prefix into account. We can now use the @TestLogging annotation and specify either the whole package name (e.g. o.e.action.metadata) or only the package name without the logger prefix (action.metadata); the custom log level will be properly applied in both cases
Enabled trace logging for RecoveryPercolatorTests#testMultiPercolator_recovery test
Cleaned up and removed unnecessary usage of concurrent collections.
The clear scroll API allows clearing all resources associated with a `scroll_id` by deleting the `scroll_id` and its associated SearchContext.
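A hedged example of freeing a scroll over REST, assuming the endpoint is `DELETE /_search/scroll/{scroll_id}`:

```sh
# Replace <scroll_id> with the id returned by a scroll search request.
curl -XDELETE "http://localhost:9200/_search/scroll/<scroll_id>"
```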
Closes#3657
Now that we have proper exit codes for the elasticsearch plugin manager (see #3463), we can add a silent mode to the plugin manager.
```sh
bin/plugin --install karmi/elasticsearch-paramedic --silent
```
Closes#3628.
Based on recent bugs (#3652) where searchers were acquired multiple times
but never released, `IndexShard#searcher()` now has a more accurate name.
Closes#3653
When a node shuts down, thread pools might throw
ESRejectedExecutionException, and our test framework fails tests if
such exceptions are not caught and hit the uncaught exception handler.
When installing a plugin, we use:
```sh
bin/plugin --install groupid/artifactid/version
```
But when removing the plugin, we only support:
```sh
bin/plugin --remove dirname
```
where `dirname` is the directory name of the plugin under `/plugins` dir.
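With this change, removal should presumably accept the same notation used at install time, for example (hypothetical, mirroring the install syntax above):

```sh
# Remove the plugin using the same name it was installed with.
bin/plugin --remove karmi/elasticsearch-paramedic
```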
Closes#3421.
The inner call to the completion stats pulled a second searcher
that never got released, causing the underlying readers to never
get closed unless the node was shut down. This was triggered by
literally every stats call that included the completion stats,
even if no completion service was used on the index.
Closes#3652
This commit adds two main pieces. The first is a ClusterInfoService,
a service running on the master nodes that fetches the total/free
bytes for each data node in the cluster as well as the sizes of all
shards in the cluster. This information is gathered every 30 seconds
by default, and the interval can be changed dynamically via the
`cluster.info.update.interval` setting. This ClusterInfoService can
hopefully be used in the future to weight nodes for allocation based
on their disk usage, if desired.
The second main piece is the DiskThresholdDecider, which can disallow
a shard from being allocated to a node, or from remaining on the node
depending on configuration parameters. There are three main
configuration parameters for the DiskThresholdDecider:
* `cluster.routing.allocation.disk.threshold_enabled` controls whether the decider is enabled. It defaults to false (disabled). Note that the decider is also disabled for clusters with only a single data node.
* `cluster.routing.allocation.disk.watermark.low` controls the low watermark for disk usage. It defaults to 0.70, meaning ES will not allocate new shards to nodes once they have more than 70% disk used. It can also be set to an absolute byte value (like 500mb) to prevent ES from allocating shards if less than the configured amount of space is available.
* `cluster.routing.allocation.disk.watermark.high` controls the high watermark. It defaults to 0.85, meaning ES will attempt to relocate shards to another node if the node disk usage rises above 85%. It can also be set to an absolute byte value (similar to the low watermark) to relocate shards once less than the configured amount of space is available on the node.
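As a sketch, these settings could be supplied at node startup via the usual `-Des.` system-property overrides (the watermark values below simply mirror the defaults described above; the settings may also be dynamically updatable, as the update interval is):

```sh
# Enable the decider and set both watermarks explicitly (illustrative values).
bin/elasticsearch \
  -Des.cluster.routing.allocation.disk.threshold_enabled=true \
  -Des.cluster.routing.allocation.disk.watermark.low=0.70 \
  -Des.cluster.routing.allocation.disk.watermark.high=0.85 \
  -Des.cluster.info.update.interval=30s
```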
Closes#3480