Commit Graph

47 Commits

Author SHA1 Message Date
Bukhtawar cd304c4def Auto-release flood-stage write block (#42559)
If a node exceeds the flood-stage disk watermark then we add a block to all of
its indices to prevent further writes, as a last-ditch attempt to stop the
node from completely exhausting its disk space. However, today this block
remains in place until manually removed, and it is a source of confusion for
users who currently have ample disk space and may not even realise they nearly
ran out at some point in the past.

This commit changes our behaviour to automatically remove this block when a
node drops below the high watermark again. The expectation is that the high
watermark is some distance below the flood-stage watermark and therefore the
disk space problem is truly resolved.
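As a sketch of the two settings involved (the values shown are the current
defaults, not recommendations):

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.high": "90%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "95%"
  }
}
```

The gap between the two thresholds is what gives the auto-release its safety
margin.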

Fixes #39334
2019-08-07 11:03:53 +01:00
David Roberts 13cb0fb98b
Periodically try to reassign unassigned persistent tasks (#36069)
Previously persistent task assignment was checked in the
following situations:

- Persistent tasks are changed
- A node joins or leaves the cluster
- The routing table is changed
- Custom metadata in the cluster state is changed
- A new master node is elected

However, there could be situations where a persistent
task that could not be assigned to a node becomes
assignable due to some other change, such as a change
in memory usage on the nodes.

This change adds a timed recheck of persistent task
assignment to account for such situations.  The timer
is suspended while checks triggered by cluster state
changes are in-flight to avoid adding burden to an
already busy cluster.
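As a sketch, the recheck period is exposed as a dynamic cluster setting; in
current Elasticsearch this is `cluster.persistent_tasks.allocation.recheck_interval`
(the value shown is the default):

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.persistent_tasks.allocation.recheck_interval": "30s"
  }
}
```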

Closes #35792
2018-12-13 09:15:27 +00:00
debadair c9e03e6ead
[DOCS] Reworked the shard allocation filtering info. (#36456)
* [DOCS] Reworked the shard allocation filtering info. Closes #36079

* Added multiple index allocation settings example back.

* Removed extraneous space
2018-12-11 07:44:57 -08:00
Gordon Brown 3c4953f4d1
State default shard limit is not a recommendation (#36093)
The new limit on the number of open shards in a cluster may be
interpreted by users as a sizing recommendation, but it is not. This
clarifies in the documentation that this is a safety limit, not a
recommendation.
2018-11-30 13:05:14 -07:00
Gordon Brown 119835decd
Always enforce cluster-wide shard limit (#34892)
This removes the option to run a cluster without enforcing the
cluster-wide shard limit, making strict enforcement the default and only
behavior.  The limit can still be adjusted as desired using the cluster
settings API.
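For example, a sketch of raising the limit via the cluster settings API (the
value is illustrative; `cluster.max_shards_per_node` is the setting name used
by current Elasticsearch):

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": 1500
  }
}
```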
2018-11-26 17:05:12 -07:00
Gordon Brown da20dfd81c
Add cluster-wide shard limit warnings (#34021)
In a future major version, we will be introducing a soft limit on the
number of shards in a cluster based on the number of nodes in the
cluster. This limit will be configurable and checked on operations
which create or open shards; a warning will be issued if the operation
would take the cluster over the limit.

There is an option to enable strict enforcement of the limit, which
turns the warnings into errors.  In a future release, the option will be
removed and strict enforcement will be the default (and only) behavior.
2018-10-23 16:35:10 -06:00
Gordon Brown dd3fe92673
[DOCS] Note that User Cluster Metadata is not private (#34156)
As user-defined cluster metadata is accessible to anyone with access to
the cluster settings, is stored in the logs, and is likely to be tracked
by monitoring solutions, it is useful to clarify in the documentation
that it should not be used to store secret information.
2018-10-02 13:36:13 -06:00
Gordon Brown cfd3fa72ed
Add user-defined cluster metadata (#33325)
Adds a place for users to store cluster-wide data they wish to associate
with the cluster via the Cluster Settings API. This is strictly for
user-defined data; Elasticsearch makes no other use of these
settings.
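For example, a sketch storing an operator contact under the
`cluster.metadata.` namespace (the key and value are illustrative):

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.metadata.administrator": "sysadmin@example.com"
  }
}
```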
2018-09-04 16:14:18 -06:00
David Turner d553a8be2f
Improve docs for disk watermarks (#30249)
* Clarify that the low watermark does not affect brand-new shards.
* Replace ES -> Elasticsearch.
* Format to 80 columns.

Resolves #25163
2018-04-30 17:31:11 +01:00
Mayya Sharipova 5dcfdb09cb
Control max size and count of warning headers (#28427)

Add a static cluster-level setting
"http.max_warning_header_count" to control the maximum number of
warning headers in client HTTP responses.
Defaults to unbounded.

Add a static cluster-level setting
"http.max_warning_header_size" to control the maximum total size of
warning headers in client HTTP responses.
Defaults to unbounded.

When either limit is exceeded, a message is logged in the main
Elasticsearch log, and any further warning headers for that response
are ignored.
2018-04-13 05:55:33 -04:00
Tanguy Leroux 6c3278b8e8 [Docs] Fix missing closing block in cluster/misc.asciidoc 2018-03-22 12:02:53 +01:00
Tanguy Leroux edf27a599e
Add new setting to disable persistent tasks allocations (#29137)
This commit adds a new setting `cluster.persistent_tasks.allocation.enable`
that can be used to enable or disable the allocation of persistent tasks.
The setting accepts the values `all` (default) or `none`. When set to
none, the persistent tasks that are created (or that must be reassigned)
won't be assigned to a node but will reside in the cluster state with
a no "executor node" and a reason describing why it is not assigned:

```
"assignment" : {
  "executor_node" : null,
  "explanation" : "persistent task [foo/bar] cannot be assigned [no
  persistent task assignments are allowed due to cluster settings]"
}
```
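
A sketch of toggling the setting itself:

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.persistent_tasks.allocation.enable": "none"
  }
}
```

Setting it back to `all` re-enables assignment.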
2018-03-22 09:18:07 +01:00
David Turner 7608480a62
Update allocation awareness docs (#29116)

Today, the docs imply that if multiple attributes are specified then the
whole combination of values is considered as a single entity when
performing allocation. In fact, each attribute is considered separately. This
change fixes that discrepancy.

It also replaces the use of the term "awareness zone" with "zone or domain", and
reformats some paragraphs to the right width.
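As a sketch, each attribute in the comma-separated list below is considered
separately (the attribute names are illustrative and must match `node.attr.*`
values set on the nodes):

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "zone,rack_id"
  }
}
```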

Fixes #29105
2018-03-19 07:04:47 +00:00
Nik Everett 66ff1b2a59
Tests: Wipe cluster settings after every test (#28410)
Cluster settings shouldn't leak into the next test.

I played with failing the test if it left over any settings but that
felt like it added more ceremony than it was worth. The advantage would be
that any test that intentionally left settings in place after the test
would fail and require a closer look but, so far as I can tell, we
don't have any such tests.
2018-01-29 11:47:04 -05:00
Nik Everett 3d19006cfa
Docs: Clear watermarks after setting them (#28402)
Clear the disk watermark after the snippet showing users how to set it.
Without this our tests will fail if the disks have less than 10GB free.
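A sketch of the clearing step, using the standard `null` idiom for resetting
dynamic cluster settings:

```
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": null,
    "cluster.routing.allocation.disk.watermark.high": null
  }
}
```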

Closes #28325
2018-01-26 15:42:53 -05:00
Glen Smith 94cfc2a0df [Docs] Fix explanation of "cluster.routing.allocation.exclude" (#27735) 2017-12-13 17:26:13 +01:00
Aron Szanto 316cb42b21 Update shards_allocation.asciidoc (#26019)
Slight language and consistency updates in shard balancing heuristics
2017-08-03 11:27:02 +02:00
Clinton Gormley 3e568f52c1 Fixed asciidoc formatting 2017-07-27 15:55:52 +02:00
Jason Tedor e165c405ac Add an underscore to flood stage setting
This is a minor bikeshedding change that renames the suffix of the
disk flood stage setting from "floodstage" to "flood_stage".

Relates #25659
2017-07-11 22:02:00 -04:00
Herman Schaaf 977712f977 Change small typo in shards_allocation.asciidoc (#25643) 2017-07-11 11:25:49 +02:00
Jason Tedor 8148e25087 Fix disk allocator docs
This commit fixes the disk allocator docs which were broken due to the
inadvertent removal of some docs snippet markup.
2017-07-07 22:11:09 -04:00
Jason Tedor bc22c1c286 Add disk threshold settings validation
This commit adds cross-settings validation for the low/high/flood stage
disk watermark settings. This validation was enabled by the introduction
of multiple settings validation.

Relates #25600
2017-07-07 19:54:36 -04:00
Clinton Gormley ca12b1f2a6 Tidied up the disk allocator docs 2017-07-06 12:16:53 +02:00
Simon Willnauer 6e5cc424a8 Switch indices read-only if a node runs out of disk space (#25541)
Today when we run out of disk all kinds of crazy things can happen,
and nodes become hard to maintain once they run out of disk space.
While we try to move shards away when we hit watermarks, this might not
be possible in many situations. Based on the discussion in #24299,
this change monitors disk utilization and adds a flood-stage watermark
that causes all indices allocated on a node hitting the flood-stage
mark to be switched read-only (with the option to be deleted). This
allows users to react to the low-disk situation while subsequent write
requests are rejected. Users can switch individual indices back to
read-write once the situation is sorted out. There is no automatic
read-write switch once the node has enough space again; this requires
user interaction.

The flood-stage watermark is set to `95%` utilization by default.
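A sketch of the manual step users take once space has been freed (the index
name is hypothetical):

```
PUT /my_index/_settings
{
  "index.blocks.read_only_allow_delete": null
}
```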

Closes #24299
2017-07-05 22:18:23 +02:00
Simon Willnauer 1cae850cf5 Add a cluster block that allows to delete indices that are read-only (#24678)
Today when an index is `read-only` the index is also blocked from
being deleted, which is sometimes undesired since, in order to make
changes to a cluster, indices must be deleted to free up space. This is
a likely scenario in a hosted environment where disk space is limited:
indices are switched read-only, but deletions should still be allowed
to free up space.
2017-05-16 17:34:37 +02:00
Boaz Leskes 70a3ac1767 Add a note about `cluster.routing.allocation.node_concurrent_recoveries` (#23160)
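A sketch of adjusting the setting (2 is the default):

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.node_concurrent_recoveries": 2
  }
}
```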
Closes #23152
2017-02-14 14:14:41 +02:00
Ali Beyad 51bfecc7cb [DOCS] fixes word usage in allocation awareness docs 2016-11-25 11:47:40 -05:00
Joshua Rich 63f484ffa3 Docs: Cluster Allocation Filtering
Put more emphasis on the fact that multiple values can be specified
and move examples after explanation of settings.
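For example, a sketch of the multiple-values form, given as a comma-separated
list (the IPs are illustrative):

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.exclude._ip": "10.0.0.1,10.0.0.2"
  }
}
```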
2016-10-17 11:02:33 +11:00
Nik Everett 3ed3e5e660 Convert more docs to CONSOLE
* plugins/discovery-azure-class.asciidoc
* reference/cluster.asciidoc
* reference/modules/cluster/misc.asciidoc
* reference/modules/indices/request_cache.asciidoc

After this is merged there will be no unconverted snippets outside
of `reference`.

Related to #18160
2016-09-21 09:36:21 -04:00
Ali Beyad f608e6c6cf Improves the documentation for the `cluster.routing.allocation.cluster_concurrent_rebalance` setting (#20531)
This clarifies in which shard allocation situations the rebalance limit
takes effect.
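A sketch of the setting being documented (2 is the default):

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.cluster_concurrent_rebalance": 2
  }
}
```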

Closes #20529
2016-09-16 16:06:18 -04:00
Jason Tedor 412c61c402 Update logger names in docs
In 7560101ec7, the Elasticsearch logger
names were modified to be their fully-qualified class names (with some
exceptions for special loggers like the slow logs and the transport
tracer). This commit updates the docs accordingly.
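For example, a sketch of addressing a logger by its fully-qualified class name
through the dynamic `logger.` settings (the logger and level are illustrative):

```
PUT _cluster/settings
{
  "transient": {
    "logger.org.elasticsearch.indices.recovery": "DEBUG"
  }
}
```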

Relates #20475
2016-09-14 08:08:49 -04:00
Brandon Wulf 6b7d40929c Switch example from inclusion to exclusion.
Page is explaining allocation exclusion- example should be about exclusion as well.
2016-07-28 21:54:22 -04:00
Jason Tedor c257e2c51f Remove settings and system properties entanglement
Today when parsing settings during bootstrap, we add a system property
for every Elasticsearch setting. Additionally, settings can be set via
system properties. This commit simplifies this situation.
 - settings are no longer propagated to system properties
 - system properties can not be used to set settings
 - the "es." prefix on settings is no longer required (nor permitted)
 - test logging has a dedicated system property (tests.logger.level)

Relates #18198
2016-05-19 14:08:08 -04:00
Clinton Gormley 3f594089c2 Renamed all AUTOSENSE snippets to CONSOLE (#18210) 2016-05-09 15:42:23 +02:00
Nik Everett 4b1c116461 Generate and run tests from the docs
Adds infrastructure so `gradle :docs:check` will extract tests from
snippets in the documentation and execute the tests. This is included
in `gradle check` so it should happen on CI and during a normal build.

By default each `// AUTOSENSE` snippet creates a unique REST test. These
tests are executed in a random order and the cluster is wiped between
each one. If multiple snippets chain together into a test you can annotate
all snippets after the first with `// TEST[continued]` to have the
generated tests for both snippets joined.

Snippets marked as `// TESTRESPONSE` are checked against the response
of the last action.

See docs/README.asciidoc for lots more.

Closes #12583. That issue is about catching bugs in the docs during build.
This catches *some* bugs in the docs during build which is a good start.
2016-05-05 13:58:03 -04:00
Ali Beyad 67c0734bf3 Update misc.asciidoc
Added documentation for the cluster.indices.tombstones.size property for maximum tombstones in the cluster state.
2016-05-04 15:21:47 -04:00
Simon Willnauer ad24653948 update allocation_awareness.asciidoc to also use 'node.attr' namespace 2016-03-30 13:52:45 +02:00
Jason Tedor 8a05c2a2be Bootstrap does not set system properties
Today, certain bootstrap properties are set and read via system
properties. This action-at-distance way of managing these properties is
rather confusing, and completely unnecessary. But another problem exists
with setting these as system properties. Namely, these system properties
are interpreted as Elasticsearch settings, not all of which are
registered. This leads to Elasticsearch failing to start up if any of
these special properties are set. Instead, these properties should be
kept as local as possible, and passed around as method parameters where
needed. This eliminates the action-at-distance way of handling these
properties, and eliminates the need to register these non-setting
properties. This commit does exactly that.

Additionally, today we use the "-D" command line flag to set the
properties, but this is confusing because "-D" is a special flag to the
JVM for setting system properties. This creates confusion because some
"-D" properties should be passed via arguments to the JVM (so via
ES_JAVA_OPTS), and some should be passed as arguments to
Elasticsearch. This commit changes the "-D" flag for Elasticsearch
settings to "-E".
2016-03-13 20:09:15 -04:00
Clinton Gormley d5f8f92559 Update shards_allocation.asciidoc
Closes https://github.com/elastic/elasticsearch/issues/16554
2016-02-13 15:16:42 +01:00
Dongjoon Hyun 21ea552070 Fix typos in docs. 2016-02-09 02:07:32 -08:00
Simon Willnauer f5e4cd4616 Remove recovery threadpools and throttle outgoing recoveries on the master
Today we throttle recoveries only on the incoming side. Nodes that have a lot
of primaries can get overloaded due to too many concurrent recoveries. To keep
that at bay we limit the number of threads that send files to the target.

The right solution here is to also throttle the outgoing recoveries that are today unbounded on
the master and don't start the recovery until we have enough resources on both source and target nodes.

The concurrency aspects of the recovery source also added a lot of complexity and additional threadpools
that are hard to configure. This commit removes the concurrent streams notion completely and sends files
in the thread that drives the recovery, simplifying the recovery code considerably.
Outgoing recoveries are now throttled on the master via an allocation decider.
2015-12-22 14:59:43 +01:00
Yannick Welsch 3a442db9bd Allocate primary shards based on allocation ids
Closes #15281
2015-12-17 15:55:50 +01:00
Masaru Hasegawa 5ae00a6129 Take initializing shards into consideration during awareness allocation
This makes the decision consistent.
Fixes #12522
2015-09-11 13:13:36 +09:00
Simon Willnauer 66b78341e4 Add note about multi data path and disk threshold deciders
Prior to 2.0 we summed up the available space on all disks on a node
due to the RAID-0-like behavior. Now we don't do this anymore and use the
min & max disk space to make decisions.

Closes #13106
2015-08-31 16:23:54 +02:00
Clinton Gormley 2b512f1f29 Docs: Use "js" instead of "json" and "sh" instead of "shell" for source highlighting 2015-07-14 18:14:09 +02:00
Boaz Leskes 41f8c96fed Docs: clarification of allocation awareness w.r.t. rack failures
Closes #11908
2015-06-29 11:57:32 +02:00
Clinton Gormley f123a53d72 Docs: Refactored modules and index modules sections 2015-06-22 23:49:45 +02:00